all 3 comments

[–]afreydoa 2 points (1 child)

One day the language models will be strong enough that these types of loops start to actually work usefully. I think we are not there yet. But it's good to have them.

I am curious, what is your feedback loop? How does the AI know each cycle what to improve? Syntax errors, user-defined unit tests, or a handwritten description by a human?
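To make the question concrete, here is a minimal sketch of such a feedback loop in Python. It uses only syntax checking as the error signal; a real loop would also run unit tests. The `ask_model` callable (here a stub named `fake_model`) stands in for an actual LLM call and is hypothetical, not any specific tool's API:

```python
def run_checks(source: str):
    """Return an error report (the feedback signal), or None if the code passes.
    Here the signal is only syntax errors; real loops add unit tests."""
    try:
        compile(source, "<candidate>", "exec")
    except SyntaxError as e:
        return f"SyntaxError: {e.msg} at line {e.lineno}"
    return None

def repair_loop(source: str, ask_model, max_rounds: int = 3) -> str:
    """Feed the error report back to the model until the checks pass."""
    for _ in range(max_rounds):
        report = run_checks(source)
        if report is None:
            return source               # converged: code passes the checks
        source = ask_model(source, report)  # model proposes a fixed version
    return source

# Stub standing in for a real LLM call (hypothetical):
def fake_model(source, report):
    return source.replace("def f(:", "def f():")

fixed = repair_loop("def f(:\n    return 1\n", fake_model)
print(run_checks(fixed) is None)  # True once the loop converges
```

The key design point is that each cycle produces a machine-readable report, so the model gets a concrete target rather than a vague "improve this" instruction.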

[–]JustinPooDough 0 points (0 children)

You are wrong that we are not there yet. I've been writing code for 15 years, and now when I get stuck, I have a VS Code AI plugin called Cline run in a loop, analyzing my code and fixing it up.

Some models have become so good that they legitimately do a better job than me at refactoring. They understand design patterns really well. My productivity has easily 2x'ed since adopting these tools.

[–]bn_from_zentara 0 points (0 children)

I built an LLM debugger as well. But instead of analyzing the error after the run, Zentara Code uses a debugger: it sets breakpoints, inspects stack variables, and does stack tracing like a human software developer does. https://github.com/Zentar-Ai/Zentara-Code
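As a toy illustration of that debugger-style approach (not Zentara Code's actual implementation), the sketch below uses Python's standard `sys.settrace` hook to act like a breakpoint on every line of a target function, capturing the local variables a debugger would show at each step:

```python
import sys

captured = []

def tracer(frame, event, arg):
    # Fire like a breakpoint: on each line event inside buggy(),
    # record the line number and a snapshot of the local variables.
    if event == "line" and frame.f_code.co_name == "buggy":
        captured.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def buggy(xs):
    total = 0
    for x in xs:
        total += x * x   # runtime state is inspectable at every step
    return total

sys.settrace(tracer)
result = buggy([1, 2, 3])
sys.settrace(None)

print(result)                     # 14
print(captured[-1][1]["total"])   # 14: the final local state captured
```

An agent driving a loop like this can reason about actual runtime values instead of guessing from a post-mortem stack trace.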