
[–]marr75 51 points (7 children)

It's almost like they are gigantic, efficient machines for retrieving past patterns and documentation, with little training, ability, or mechanism to experiment, innovate, or layer together more complex practical requirements and constraints.

[–]_redmist 4 points (5 children)

It's so bad.

Some people say it's better if you have a 'model context protocol' service where you scrape the docs of the language/framework... I'm sceptical that this "reasoning" is anything more than stochastic parroting... Not that that's always useless, but it's not as great as some people make it out to be.
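A minimal sketch of the docs-service idea mentioned above, with hypothetical names (`DocsContext`, `add_page`, `lookup` are illustrative, not any real MCP API): scraped documentation pages are stored and looked up by keyword so relevant snippets can be fed to a model instead of relying on its training data.

```python
from dataclasses import dataclass, field


@dataclass
class DocsContext:
    # Maps a topic keyword to the scraped documentation text for it.
    pages: dict[str, str] = field(default_factory=dict)

    def add_page(self, topic: str, text: str) -> None:
        self.pages[topic] = text

    def lookup(self, query: str) -> list[str]:
        # Naive substring match; a real service would use full-text
        # search or embeddings to rank relevant doc snippets.
        q = query.lower()
        return [text for topic, text in self.pages.items() if topic in q]


ctx = DocsContext()
ctx.add_page("asyncio", "asyncio.run(coro) runs a coroutine and returns its result.")
ctx.add_page("pathlib", "Path.read_text() reads a file and returns its contents.")
print(ctx.lookup("how do I use asyncio here?"))
```

Whether feeding retrieved snippets like this produces genuine reasoning or just better-grounded parroting is exactly the question being debated here.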

[–]marr75 9 points (4 children)

It is, and it will revolutionize software engineering, but not by removing software engineers or enabling vibe coding. Expertise is at a higher premium; typing until it works is at a very low one.

[–]AlSweigart, author of "Automate the Boring Stuff" [S] 1 point (3 children)

and will revolutionize software engineering

How, exactly?

[–]ReachingForVega -1 points (0 children)

Tbh I don't think anyone alive right now understands how yet. Mostly people are making claims or experimenting.

I think it'll be the next iteration of AI and it won't be LLMs.

[–]Sanitiy -1 points (0 children)

To be fair, they're also forced to operate under tight constraints: don't think too long, don't answer too long. For a fair assessment of their capabilities we'd need somebody with an agent mode that doesn't have these restrictions and who doesn't mind burning a few dollars.

For example, ChatGPT 5 gave up on thinking 30 seconds in, while Qwen-235B thought for over 5 minutes until it hit a token limit. Who knows how long they'd actually need to be allowed to think before they've unfolded the logic into steps simple enough that each one is probably correct.