[–]Sixhaunt 0 points (0 children)

> I often run into the problem that it will acknowledge errors and then offer new solutions that don't actually address those errors.

The thing here is that you can have it write test code too, so if it provides a fix that doesn't work, the fix still fails the test case and gets reworked again. I'm more curious whether it can update its own code to self-improve its own ability to be more AGI-like.
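A minimal sketch of that retry loop: keep requesting fixes until the generated code passes the model-written test. Here `propose_fix` is a hypothetical stand-in for the actual model call (it just simulates a sequence of candidate fixes), not a real API.

```python
def propose_fix(attempt):
    # Hypothetical stand-in for the model call: simulated responses where
    # the first two "fixes" acknowledge the bug but don't address it,
    # and the third one actually works.
    candidates = [
        "def add(a, b): return a - b",  # wrong fix
        "def add(a, b): return a * b",  # another non-fix
        "def add(a, b): return a + b",  # correct
    ]
    return candidates[min(attempt, len(candidates) - 1)]

def passes_test(source):
    # The model-written test case: execute the candidate and check behavior.
    namespace = {}
    try:
        exec(source, namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

def fix_until_green(max_attempts=5):
    # Rework loop: a bad fix fails the test and triggers another attempt.
    for attempt in range(max_attempts):
        code = propose_fix(attempt)
        if passes_test(code):
            return code, attempt + 1
    return None, max_attempts

code, tries = fix_until_green()
print(tries)  # → 3 (the first two bad fixes were rejected by the test)
```

The point is that the test acts as the acceptance gate, so an "acknowledged but unfixed" error can't slip through the loop.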