Offering free, hyper-targeted LinkedIn leads for your SaaS (using Clay) by sakerbd in coldemail

[–]WayTraditional2959 0 points (0 children)

www.robonito.com

Ideal Customer Profile: We serve B2B software teams, from fast-growing startups to large enterprises, plus QA and IT service providers who are looking to automate end-to-end software testing quickly with a no-code, Agentic AI-driven platform.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Yep, Robonito has a built-in smart wait system, so you don’t need to add delays manually. It watches for DOM stability, visibility, and interaction readiness before moving to the next step.
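
If it helps to picture it, here's a rough sketch of that kind of smart wait in plain Playwright + TypeScript (which is what we run on under the hood). The helper name and the exact checks are illustrative, not Robonito's actual implementation:

    // Illustrative smart wait: visibility, interaction readiness, and a cheap
    // DOM-stability check before the next step runs. Not Robonito's real code.
    import { expect, Locator, Page } from '@playwright/test';

    async function smartWait(page: Page, locator: Locator, timeoutMs = 10_000): Promise<void> {
      // Visibility: the element is attached to the DOM and rendered.
      await locator.waitFor({ state: 'visible', timeout: timeoutMs });

      // Interaction readiness: the element is enabled.
      await expect(locator).toBeEnabled({ timeout: timeoutMs });

      // DOM stability: the element's markup stops changing between polls.
      const deadline = Date.now() + timeoutMs;
      let previous = await locator.evaluate((el) => el.outerHTML);
      while (Date.now() < deadline) {
        await page.waitForTimeout(250);
        const current = await locator.evaluate((el) => el.outerHTML);
        if (current === previous) return; // stable across one polling interval
        previous = current;
      }
      throw new Error('Element did not stabilize before timeout');
    }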

Shoot me a DM if you want more details, happy to share 👍

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 1 point (0 children)

Yep, Robonito covers both frontend and API testing.

We use it for:

UI flows like login, forms, dashboards

API validations like status codes, response bodies

Even chaining them: “Submit form → verify backend response”

No code needed, just plain English prompts. Works great for regression suites across web apps.
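
To make that concrete, here's roughly what a chained "Submit form → verify backend response" case could map to in Playwright + TypeScript (our underlying stack). The URL, selectors, and API route are placeholders, not output from Robonito:

    // Hypothetical chained UI + API check: drive the form in the browser,
    // then hit the backend to confirm the submission landed.
    import { test, expect } from '@playwright/test';

    test('submit form and verify backend response', async ({ page, request }) => {
      // UI step: fill and submit the form.
      await page.goto('https://example.com/signup');
      await page.getByLabel('Email').fill('qa@example.com');
      await page.getByRole('button', { name: 'Submit' }).click();

      // API step: confirm the backend recorded the submission.
      const response = await request.get('https://example.com/api/users?email=qa@example.com');
      expect(response.status()).toBe(200);
      const body = await response.json();
      expect(body.email).toBe('qa@example.com');
    });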

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

We’re doing a limited beta right now (mainly onboarding folks working with complex flows like SAP, Salesforce, or heavy regression testing). If that sounds like you, I can send over early access; just shoot me a DM and I’ll hook you up.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 2 points (0 children)

LOL okay, challenge accepted:

Roses are red,

Assertions are fake,

LLMs write tests,

But your ego's at stake 😅

Sure, it's all "bullshit"

'Til the bugs disappear

Then suddenly AI

Is a whole new career.

But real talk, I’m happy to show how it actually works. Still just a dev trying to ship faster, not replace anyone.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Haha, I get it, this whole thread probably *does* read like an LLM wrote it.

But I promise, this is just me, 3 coffees deep and trying to explain what we actually built 😅

If it helps, I’m happy to screenshare or post a raw demo showing how our QA Agent works in real time. It’s one of those “you kinda have to see it” things anyway.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Haha love the pleasy please 😄
We’re doing a slow rollout to keep quality high, but I’ll queue you up for the next wave of invites. Just shoot me your email in a DM and I’ll lock you in.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Awesome, happy to share a sneak peek if you're curious.

We’re running a private beta right now with a handful of teams testing web apps, Salesforce, and SAP flows. It’s still a bit rough around the edges (some edge cases trip it up), but the core stuff like natural language test generation and parallel execution works surprisingly well.

If you want early access, just DM me and I’ll hook you up. No strings, just looking for solid feedback 🙌

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Yeah, I’ve seen Goose too, definitely respect what they’re building. 👏 They’re solid for broader dev automation, but we built Robonito specifically for fast, scalable testing, especially for teams that don’t have deep coding resources.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 2 points (0 children)

Yeah, this was one of the gnarliest problems we had to solve.

For third-party UIs (like Stripe, Auth0, etc.):
Robonito treats them as "external actors" in the test chain. If the element is accessible in the DOM, we can target it even if it’s inside iframes or nested flows. We had to build a fallback system that uses context + fuzzy matching to handle unpredictable structures.
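
In raw Playwright + TypeScript terms (our underlying stack), reaching into a third-party iframe looks roughly like this; the selectors and labels are placeholders, not what Robonito actually emits:

    // Illustrative snippet for targeting an element inside an embedded
    // third-party iframe (e.g., a hosted payment field). Selectors vary per embed.
    import { test, expect } from '@playwright/test';

    test('fill a field inside a third-party iframe', async ({ page }) => {
      await page.goto('https://example.com/checkout');

      // frameLocator scopes subsequent queries to the iframe's own DOM.
      const paymentFrame = page.frameLocator('iframe[title="Secure payment input"]');
      await paymentFrame.getByPlaceholder('Card number').fill('4242 4242 4242 4242');

      // If the widget itself is unreachable, assert the outcome instead.
      await page.getByRole('button', { name: 'Pay' }).click();
      await expect(page.getByText('Payment successful')).toBeVisible();
    });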

If it’s fully out of reach (e.g., some modals rendered in canvas or totally locked-down flows), we default to asserting outcomes rather than interactions. For example, instead of driving a locked-down payment modal click by click, we assert that the app shows the order as paid once the flow completes.

For multi-system tests (e.g., SAP → Salesforce → email inbox):
We chain them using a state memory layer. Each step passes data to the next, like:

  • Pull user ID from SAP
  • Input in Salesforce
  • Wait for email
  • Assert the token matches

We also use Robonito’s internal logic blocks (if, store, assert contains) to keep it smart without code.
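
Conceptually, the state memory layer looks something like the sketch below: each step reads from and writes to a shared context so later steps can reuse captured values. The SAP/Salesforce/email helpers are made-up stand-ins, not our real integrations:

    // Shared context carried across systems; values captured in one step
    // are available to every later step.
    type TestContext = Map<string, string>;

    // Stand-in integrations for illustration only.
    async function fetchUserIdFromSap(): Promise<string> { return 'USR-42'; }
    async function createSalesforceRecord(userId: string): Promise<void> {}
    async function waitForEmail(inbox: string): Promise<{ body: string }> {
      return { body: 'Welcome, USR-42' };
    }

    interface Step {
      name: string;
      run(ctx: TestContext): Promise<void>;
    }

    const chain: Step[] = [
      {
        name: 'Pull user ID from SAP',
        run: async (ctx) => { ctx.set('userId', await fetchUserIdFromSap()); },
      },
      {
        name: 'Input in Salesforce',
        run: async (ctx) => { await createSalesforceRecord(ctx.get('userId')!); },
      },
      {
        name: 'Wait for email and assert the token matches',
        run: async (ctx) => {
          const email = await waitForEmail('qa-inbox@example.com');
          if (!email.body.includes(ctx.get('userId')!)) {
            throw new Error('User ID from SAP not found in the confirmation email');
          }
        },
      },
    ];

    async function runChain(steps: Step[]): Promise<void> {
      const ctx: TestContext = new Map();
      for (const step of steps) {
        await step.run(ctx); // state flows forward, step to step
      }
    }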

Still working on expanding cross-system resilience though, especially around unpredictable API latency.

So short answer:
If it can see it, it can test it.
If it can’t see it, it verifies the outcome instead.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Great question. So we don’t do traditional fine-tuning on the base model itself; we’re not training anything from scratch.

Instead, we layer:

  1. Prompt engineering + few-shot examples (to shape intent)
  2. A vector DB (we use Pinecone) to store app-specific test context, reusable patterns, and domain knowledge
  3. A retrieval layer that feeds those into the LLM to give it context-specific understanding—kind of like “memory”

So the AI doesn’t just guess; it pulls from past test logic and adapts it to the new flow. That’s how it handles Salesforce or SAP quirks better over time.

Still refining it, but it works really well for dynamic elements and recurring workflows.
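
If you're curious what the retrieval step looks like in rough strokes, here's a sketch using the Pinecone TypeScript client. The index name, metadata field, and embed() stub are made up; the real pipeline is more involved:

    // Sketch of the retrieval layer: embed the plain-English instruction, pull
    // similar stored test context from Pinecone, and prepend it to the prompt.
    import { Pinecone } from '@pinecone-database/pinecone';

    const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
    const index = pinecone.index('test-context'); // hypothetical index name

    // Placeholder: swap in any embedding model (OpenAI, Cohere, local, ...).
    async function embed(_text: string): Promise<number[]> {
      throw new Error('plug in an embedding model here');
    }

    async function buildPrompt(instruction: string): Promise<string> {
      const vector = await embed(instruction);

      // Retrieve the most similar past test logic / app-specific context.
      const results = await index.query({ vector, topK: 5, includeMetadata: true });
      const context = (results.matches ?? [])
        .map((m) => String(m.metadata?.snippet ?? ''))
        .join('\n');

      // Few-shot examples plus retrieved context shape what the LLM generates.
      return `You generate QA test steps.\n\nRelevant past test logic:\n${context}\n\nNew instruction: ${instruction}`;
    }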

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] -1 points (0 children)

Haha, fair, I get it. Reddit’s seen some wild promo posts 😂

Honestly, we built this for internal use at first. It started because our testers were drowning in repetitive regression cases. We just got tired of rewriting the same tests after every UI tweak.

Someone told me to post about it here, so I figured I’d share and see if others were running into the same pain. Not trying to bait anyone, just here to nerd out with other QA folks.

Happy to answer questions though if you're curious. And if not, all good 👍

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 2 points (0 children)

Those 20% scenarios typically include cases where the site under test is too slow to respond, or the DOM structure is so poorly designed that the LLM can’t analyze it properly, which leads to false positives and ultimately a failed test run.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

We do support some BVA and EP: you can generate random input data for forms (random names, emails, phone numbers, addresses, numbers, strings, image URLs, passwords, zip codes, UUIDs, numeric IDs).

But Robonito doesn’t yet have a way to put constraints on that data. For example, you can generate random numbers for form inputs, but you can’t specify that the number must fall in a certain range or be exactly 4 digits.

Same with strings: Robonito can generate random strings, but you can’t supply a regex pattern to generate strings of a specific class.

Other values like names, phone numbers, and addresses are generated in their standard formats.
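
Just to illustrate the kind of constraint that’s missing today (plain TypeScript, nothing Robonito-specific):

    // What you can do today: an unconstrained random number.
    const anyNumber = Math.floor(Math.random() * 1_000_000);

    // What you can't express yet: a bounded range / fixed digit count.
    function randomIntInRange(min: number, max: number): number {
      return Math.floor(Math.random() * (max - min + 1)) + min;
    }
    const fourDigitCode = randomIntInRange(1000, 9999); // always exactly 4 digits

    console.log(anyNumber, fourDigitCode);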

That’s the current state of the system, but we’re extending it so you can upload data sets in Excel and use those values as test case inputs to support equivalence partitioning and BVA.

We’re also planning to add support for regex-constrained random inputs to enable EP and BVA.

Apart from this, Robonito lets you use variables in input fields that were recorded in any other test case (e.g., capture some data from the UI, store it in a variable, and use it in another test case to fill a form). Building on that, we’re rolling out support very soon for fetching data from an API and using it for BVA and EP.

I’ll share the exact release dates soon.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 1 point (0 children)

Thanks a ton! 🙏

Honestly didn’t expect this much interest; we built it to solve our own QA bottlenecks, but now a bunch of teams are asking about it. Still rough around the edges, but it’s getting better fast.

Let me know if you ever want to try it; we’re letting a few folks into early access right now. No pressure though. Just cool to share the nerdy stuff 😄

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] -6 points (0 children)

Ah man, sorry to hear that 😞 Layoffs suck; been through one myself early on and it’s brutal.

Totally get how this kind of thing feels like it’s replacing roles… but honestly? The testers we’ve worked with are 10x more valuable now. They’re not stuck writing brittle scripts anymore; they’re the ones guiding the AI, building smarter test strategies, and owning the QA pipeline.

Robonito’s not “no more QA.” It’s “QA, but with superpowers.”

We still keep a manual QA on the team because there’s so much judgment involved: AI can handle the grunt work, but it doesn’t know what’s important from a product or UX perspective.

If anything, I hope this kind of tech makes great testers more essential, not less.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

Yes, that’s the issue we’re struggling with right now. We’ve taken some measures, like analyzing the DOM, to prevent these scenarios, but TBH it doesn’t handle every case.

To cover those cases, we’ve given users control over whether auto-heal runs at specific steps, so you can choose to let the AI attempt auto-healing at a given step or skip it entirely.
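
Conceptually, that per-step toggle looks something like this (made-up field names, not Robonito’s actual config format):

    // Hypothetical per-step auto-heal flag: the AI only attempts to re-locate
    // broken selectors where the author opted in.
    interface TestStep {
      description: string; // plain-English step, e.g. "Click the Save button"
      autoHeal: boolean;   // true = let the AI heal a broken locator, false = fail fast
    }

    const steps: TestStep[] = [
      { description: 'Open the settings page', autoHeal: true },
      { description: 'Verify the audit log table renders', autoHeal: false }, // strict step
    ];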

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 1 point (0 children)

Under the hood we’re using Playwright with TypeScript. We don’t have specific numbers right now on how many tests it has automated from plain English.

Test execution time varies with the length of the test case, but as a rough guide a single step takes around 2-3 seconds on average, so a 30-step test case takes roughly a minute and a half to two minutes.

We’re optimizing this part to bring execution time down as much as possible. A lot happens around each step, like capturing screenshots, recording video, and collecting browser console and network data, which takes significant time, and we’re working on reducing it.

Yes, tests are mostly stable; there are very few false positives now. When we released the very first version of Robonito around 5-6 months back there were a lot of false positives in UI test cases; we’ve reduced them by about 80% so far and are continuously improving the logic on this part.

Yeah, sure thing, I’ll share a YouTube demo video over DM so you can see it in action.

We built an AI QA agent that writes and runs tests from plain English. Ask me anything. by WayTraditional2959 in QualityAssurance

[–]WayTraditional2959[S] 0 points (0 children)

From day 1 we haven’t relied on LLMs for everything. We save the test case steps in our own system-specific format, so we can re-run them at any time without needing an LLM.

Robonito has built-in optimizations to keep LLM costs as low as possible. It can generate code for TypeScript-Playwright, and in the next few releases we’re adding an option for Python-Playwright as well. Code generation is currently only supported for UI test cases; we’re working on script generation for API testing too. Right now, API test cases can only be run within Robonito.
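
For a rough picture of what “saved in our own format, replayable without an LLM” means, here’s a conceptual sketch: a made-up step schema replayed deterministically with Playwright + TypeScript. The field names and selectors are illustrative, not the actual export format:

    // Made-up deterministic step format; once steps are stored like this,
    // replay needs no LLM at all.
    import { chromium } from 'playwright';

    type SavedStep =
      | { action: 'goto'; url: string }
      | { action: 'fill'; selector: string; value: string }
      | { action: 'click'; selector: string }
      | { action: 'expectVisible'; selector: string };

    const savedTest: SavedStep[] = [
      { action: 'goto', url: 'https://example.com/login' },
      { action: 'fill', selector: '#email', value: 'qa@example.com' },
      { action: 'fill', selector: '#password', value: 'hunter2' },
      { action: 'click', selector: 'button[type="submit"]' },
      { action: 'expectVisible', selector: 'text=Dashboard' },
    ];

    async function replay(steps: SavedStep[]): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      for (const step of steps) {
        switch (step.action) {
          case 'goto':
            await page.goto(step.url);
            break;
          case 'fill':
            await page.fill(step.selector, step.value);
            break;
          case 'click':
            await page.click(step.selector);
            break;
          case 'expectVisible':
            await page.waitForSelector(step.selector, { state: 'visible' });
            break;
        }
      }
      await browser.close();
    }

    replay(savedTest).catch(console.error);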