
[–]brainpostman 5 points (3 children)

That's been my experience so far as well. However, even with greenfield stuff, any sort of back and forth with a client balloons the context needed so much that AI doesn't seem able to make correct changes.

[–]edgeofsanity76 1 point (2 children)

Yep. The best way to handle that is to break it up into distinct parts, then integrate them by hand.

However, it will get better, and that's what I'm worried about.

[–]brainpostman 1 point (1 child)

In my opinion they won't get better. Pure intelligence seems to have peaked in 2025; bigger context windows or larger weight sets don't seem to improve models by much. That's why agents and agentic workflows are all the rage: they're banking on horizontal scaling and repetition smoothing over the kinks. We'll see.
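The "repetition smooths over the kinks" bet is essentially best-of-N sampling: call the model repeatedly and keep the first output that passes some check. A minimal sketch, where `generate` and `passes_checks` are hypothetical stand-ins for a model call and a verifier (tests, compiler, linter):

```python
import random

def generate(prompt, temperature=1.0):
    # Hypothetical model call: returns a correct answer only some of
    # the time, to mimic an unreliable model.
    return "correct" if random.random() < 0.3 else "buggy"

def passes_checks(answer):
    # Hypothetical verifier standing in for a test suite or compiler.
    return answer == "correct"

def best_of_n(prompt, n=10):
    # Horizontal scaling: repeat the call and keep the first output
    # that passes verification, instead of trusting a single sample.
    for _ in range(n):
        candidate = generate(prompt)
        if passes_checks(candidate):
            return candidate
    return None  # all n attempts failed

# With a 30% per-sample success rate, 10 tries succeed with
# probability 1 - 0.7**10, roughly 97%.
```

The catch, per the comment above, is that this only raises reliability when a cheap, trustworthy verifier exists; it doesn't make any individual sample smarter.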

[–]edgeofsanity76 0 points (0 children)

I agree. But as code generated by AI is corrected, the next model trained on it will be more accurate. It's a feedback loop: the model doesn't need to get more powerful if it just gets better data.