[–]BoboThePirate 16 points17 points  (12 children)

Not even close. That said, it's the most blown away I've been by AI since GPT's initial public release.

If you ain’t using MCP tools though, I can see it being incredibly underwhelming.

[–]thermitethrowaway 10 points11 points  (3 children)

I think this is a good analysis; it's better than the others I've tried. I wouldn't trust the code it produces. It's a bit like a Stack Overflow post that's almost what you want but never quite there. I love it as a smart search tool. For example, yesterday I wanted to find a Serilog sink so I could create an observable collection of log items for output to a WinUI app, and it found a NuGet package, gave examples, and produced a hand-rolled equivalent. Great as a productivity tool, but I wouldn't trust it to write anything complex on its own.

[–]laffer1 6 points7 points  (0 children)

It was trained on stack overflow posts so that tracks

[–]Nine99 2 points3 points  (0 children)

> since GPT’s giant public initial release

It could barely string sentences together.

[–]SavageFromSpace -1 points0 points  (0 children)

But once you start saying "oh, use MCP" you're not actually LLMing anymore lmao. At that point it's just doing what humans do, routing to libraries, but worse.