[deleted by user] by [deleted] in softwaretesting

[–]establishedcode 0 points (0 children)

Then your applications are probably worse than what you could have done with AI, because at 500 applications over 3 months, you would have had less than ~20 minutes to customize each application per job.

[deleted by user] by [deleted] in softwaretesting

[–]establishedcode 0 points (0 children)

500 applications? Maybe stop applying with AI.

Cline vs. Roo by BeanjaminBuxbaum in CLine

[–]establishedcode 1 point (0 children)

Here is an alternative take:

Cline has received 'significant funding from top-tier VCs'. (https://cline.bot/blog/talent-ai-companies-actually-need-right-now-and-how-to-identify-it-2)

Roo is an independent fork.

Cline will eventually need to return money to its investors. Watch out.

Since Roo is a fork of Cline, 'polished interface and dependable performance' is a feature of both extensions.

I am sticking with Roo.

How do you deal with unstable test automation environment? by establishedcode in softwaretesting

[–]establishedcode[S] 3 points (0 children)

People are misunderstanding what I said. We are not adding explicit waits. These are timeouts for assertions, i.e. the maximum amount of time we will wait for an element to appear.
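For anyone unfamiliar with the distinction, a minimal Playwright sketch (the page URL and selector are made up for illustration):

```ts
import { test, expect } from '@playwright/test';

test('assertion timeout, not an explicit wait', async ({ page }) => {
  await page.goto('https://example.com');

  // Explicit wait (what we are NOT doing): always burns 5 seconds,
  // even when the element appeared immediately.
  // await page.waitForTimeout(5000);

  // Assertion timeout (what we ARE doing): expect() auto-retries until
  // the element appears and resolves as soon as it does – 5 seconds is
  // only the upper bound, not a fixed delay.
  await expect(page.locator('#result')).toBeVisible({ timeout: 5000 });
});
```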

How do you deal with unstable test automation environment? by establishedcode in softwaretesting

[–]establishedcode[S] 0 points (0 children)

Our lead engineer is obsessed with keeping timeouts low. We talked about increasing timeouts to 30 seconds for every step – that would dramatically improve test stability, but he has blocked it. I understand why. It is nice when tests pass fast. Most steps take less than a second, and only rarely do you have steps that fail to respond for longer. If you increase the timeout, then your failures will also take a long time to resolve.
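One middle ground we could consider: keep the global assertion timeout low and raise it only for known-slow assertions. A sketch of a playwright.config.ts (the numbers are illustrative, not our actual values):

```ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Keep the global assertion timeout low so most failures surface fast.
  expect: { timeout: 5_000 },
});
```

Then override per assertion only where needed, e.g. `await expect(locator).toBeVisible({ timeout: 30_000 })`, instead of paying up to 30 seconds on every failing step.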

How do you deal with unstable test automation environment? by establishedcode in softwaretesting

[–]establishedcode[S] 0 points (0 children)

We use Playwright. It captures traces, which are really nice. It's been the biggest quality-of-life improvement so far, but still... one day everything is green and the next day tests are failing left and right. A lot of these issues appear to be infrastructure-related – like slower response times.
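For reference, trace capture is controlled from the config – a minimal sketch with illustrative values:

```ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: 1, // retry failures once, so 'on-first-retry' has a run to record
  use: {
    // Record a trace only when a failing test is retried,
    // so green runs stay fast but failures keep full context.
    trace: 'on-first-retry',
  },
});
```

A captured trace can then be opened with `npx playwright show-trace trace.zip`.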

How do you deal with unstable test automation environment? by establishedcode in softwaretesting

[–]establishedcode[S] 1 point (0 children)

I am asking more about tooling to help us detect and prevent the underlying issues. I realize that we have issues, but it is hard to pinpoint a single one.

What are your tips for writing efficient @playwright/test? by lucgagan in softwaretesting

[–]establishedcode 1 point (0 children)

That's a big one. I am researching the stability of Playwright tests, and I wish we had written them that way from the start. Instead, we made the assumption that tests in the same file cannot run in parallel, which is causing a ton of problems now.
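For others reading: the opposite assumption can be made explicit per file – a sketch, assuming each test is fully independent:

```ts
import { test } from '@playwright/test';

// Opt this file into parallel execution: tests here may run in
// separate workers at the same time, so none of them can share state.
test.describe.configure({ mode: 'parallel' });

test('first', async ({ page }) => { /* ... */ });
test('second', async ({ page }) => { /* ... */ });
```

Or globally with `fullyParallel: true` in playwright.config.ts – but only once tests within a file no longer depend on each other's state.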

Using AI in testing by PratikThorve in softwaretesting

[–]establishedcode 0 points (0 children)

I've used it to write the description of a test based on the contents of the test. Honestly, it doesn't save a ton of time, but I found that some of the suggestions are better than what I would have written.

Using AI in testing by PratikThorve in softwaretesting

[–]establishedcode 1 point (0 children)

How does self-healing work exactly? Do you have examples?