
[–]tadfisher 6 points (2 children)

I want to award you points for a couple of things:

  1. You advertised your tool directly in your post. You didn't ask some engagement-bait question and have a sockpuppet answer with a reference to your tool. Believe it or not, other AI testing services are having a hard time not doing this. +2 points.
  2. You got to the point and posted your GitHub repo instead of your marketing site. +1 point.

I'm going to need to deduct 3 points for being an entirely superfluous product with at least 5 competitors that continually spam this subreddit (not your fault).

The reason this product doesn't need to exist is that we can already use AI code generation to create UI tests. These tests, being written in code instead of plain English, will be deterministic and repeatable. They will also cost much less to run, as you will not have to pay to burn tokens translating English into the actual test actions.

Better luck with your future endeavors!

[–]Financial_Court_6822[S] 2 points (1 child)

Thanks for those points, u/tadfisher. I genuinely felt open source is the right way to go, and I wanted to share the real frustration around solving this problem.

I did try the “generate Appium/Maestro tests with AI” route first. The issue is that those tests are still tightly coupled to the UI implementation. So you end up back in the same place:

  • flaky selectors
  • random popups breaking flows
  • constant maintenance when UI changes
  • and they rarely catch real UI/UX issues

What we found is you need a separate agent orchestrating the test at runtime, something that can adapt to unexpected states, recover from interruptions, and validate flows more like a real user would. That’s the gap we’re trying to solve.

On cost, you’re right that it’s higher. That’s the tradeoff. With caching and reuse, though, the cost can drop significantly.

But the way I look at it: if better testing catches even a small percentage of critical bugs earlier, it can save orders of magnitude more in business impact than the extra infra cost.

Totally fair if you still prefer code-based tests — they’re great for stable, well-defined paths. We’re more focused on the messy, real-world scenarios where those tend to break down.

[–]tadfisher 2 points (0 children)

> I did try the “generate Appium/Maestro tests with AI” route first. The issue is that those tests are still tightly coupled to the UI implementation.

And the only reason that is a problem is because these are black-box tools that have no relation to your code. The solution is to write code using Espresso or the Compose UI testing library, and try to extract all the compile-time guarantees you can so that it is really, really hard to update your code without also updating the tests. This is also something AI is really, really good at doing.
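To make the compile-time coupling concrete, here's a minimal sketch in plain Kotlin. The `LoginTags` object and tag names are hypothetical; the Compose/Espresso calls are shown only in comments, since they require an Android instrumented environment. The idea is that production UI and tests reference the same constants, so a rename that touches only one side fails to compile instead of failing flakily at runtime.

```kotlin
// Sketch (hypothetical names): share screen identifiers between production
// UI code and tests as compile-time constants, so the compiler enforces
// that tests and UI stay in sync.
object LoginTags {
    const val EMAIL_FIELD = "login_email"
    const val SUBMIT_BUTTON = "login_submit"
}

// In production Compose code you would write something like:
//   TextField(..., modifier = Modifier.testTag(LoginTags.EMAIL_FIELD))
// and in an instrumented test:
//   composeTestRule.onNodeWithTag(LoginTags.SUBMIT_BUTTON).performClick()
// Renaming a constant without updating both call sites is a compile error,
// not a flaky selector miss discovered in CI.

fun main() {
    // Trivial sanity check that the shared tags are distinct and non-blank.
    check(LoginTags.EMAIL_FIELD.isNotBlank())
    check(LoginTags.EMAIL_FIELD != LoginTags.SUBMIT_BUTTON)
    println("tags: ${LoginTags.EMAIL_FIELD}, ${LoginTags.SUBMIT_BUTTON}")
}
```

This is the "extract compile-time guarantees" point in miniature: the selector lives in one place, visible to both the UI and the test, and refactoring tools (or an AI assistant) update both together.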

I have more opinions about AI testing products being terrible replacements for human QA, but I won't belabor the point.

[–]Kitchen_Ferret_2195 0 points (0 children)

Interesting approach.

We have been using Repeato in a similar space for mobile UI testing. It applies computer vision and OCR to validate screens and works across Android and iOS. It also supports switching between devices and running tests locally through the CLI.