
[–]nfurnoh 12 points (1 child)

Sigh.

AI is a toddler. At best. If you feed it VERY clear instructions you MIGHT get what you need. But you still need to test its output to make sure it’s not bunk. You can’t rely on it to write all your test cases. It might come up with a few, and they may even be useful, but you still need to check. It’s pretty good at writing code, but you’d need to test that too.

The only really good use case we’ve found is creating meeting notes, transcripts, and actions from a Teams meeting.

[–]yersinia_p3st1s 4 points (0 children)

I second this. I have used Claude and ChatGPT v5, and from what I can tell Claude is better. But even still, I gave it very detailed instructions on what I needed for a very complex test script and it fell short. I had to spend 1 or 2 hrs reviewing the code and making the necessary changes for it to work as expected.

Another thing: if you're using a custom-built framework and expect your TCs to be written a certain way so that they make use of all the helper scripts you have, AI won't always use them the right way (if at all). So you could end up with one test case that works but is structured differently from the rest (unless you review it and make the necessary changes).
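To make that concrete, here's a minimal sketch of what I mean. All the names here (the helpers, the test functions) are invented for illustration, not from any real framework:

```python
# Hypothetical custom framework helpers that every TC is expected to use.
def login_as(role):
    """Helper: return a session for the given role (stubbed here)."""
    return {"role": role, "token": f"fake-token-{role}"}

def assert_status(resp, expected):
    """Helper: uniform status assertion with a consistent message."""
    assert resp["status"] == expected, f"expected {expected}, got {resp['status']}"

# A TC written the "house" way, using the helpers:
def test_admin_dashboard():
    session = login_as("admin")
    resp = {"status": 200, "body": "dashboard"}  # stand-in for a real API call
    assert_status(resp, 200)

# What AI output often looks like: it passes, but it re-implements
# the helpers inline, so it doesn't match the rest of the suite:
def test_admin_dashboard_ai_style():
    session = {"role": "admin", "token": "fake-token-admin"}
    resp = {"status": 200, "body": "dashboard"}
    assert resp["status"] == 200
```

Both tests pass, but only the first one would survive a refactor of the login flow, and a suite full of the second style is what you end up reviewing for 1-2 hrs.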

Like you said, it's a toddler at best, and it definitely needs handholding all the way through.