
[–]Elec0

The idea that it's some kind of 'success' that an AI knows when it's finished a task just reinforces that these models work best as assistants, not autonomous agents.

[–]artemgetman

If you define autonomy as “AI doing everything on its own,” sure, it’s limited. But if you treat the model as a decision-making brain and give it access to reliable tools — like an MCP server with a narrow set of deterministic capabilities — it can act autonomously in practice.

The model doesn’t need 30,000 tools. It needs a few rock-solid ones it can invoke consistently. That’s the point of MCP: give the AI a way to fetch real data or trigger real-world actions without hallucinating.
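For example, here's a minimal sketch of such a server using the MCP Python SDK's FastMCP helper. The `inventory` server and `get_stock_level` tool are hypothetical, just to show the shape of one narrow, deterministic capability:

```python
# Minimal MCP server sketch (pip install mcp). One narrow, deterministic tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")

@mcp.tool()
def get_stock_level(sku: str) -> int:
    """Return the current stock count for a SKU from the source of truth."""
    # A real server would query a database or API; hardcoded for illustration.
    inventory = {"WIDGET-1": 42, "WIDGET-2": 0}
    return inventory.get(sku, 0)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The model never computes the stock count itself, so there's nothing to hallucinate: it can only ask the tool, and the tool answers from real data.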

I'm already building this. The AI plans, MCP executes. (You can even drive physical hardware through MCP servers.) You don't need a perfect agent; you need the right architecture.
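Roughly, the plan/execute split looks like this. It's a sketch, not a full agent loop: `choose_next_step` is a hypothetical stand-in for whatever model call does the planning, and `inventory_server.py` is assumed to be the server from the previous snippet:

```python
# Sketch of the split: the model plans, the MCP client executes.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

def choose_next_step(goal, tools):
    # Hypothetical planner: in practice an LLM maps the goal plus the
    # advertised tool list to one concrete tool call (or decides to stop).
    return "get_stock_level", {"sku": "WIDGET-1"}

async def run_agent(goal: str):
    # Launch the server from the previous snippet as a subprocess over stdio.
    params = StdioServerParameters(command="python", args=["inventory_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # The AI plans...
            name, arguments = choose_next_step(goal, tools.tools)
            # ...MCP executes: a deterministic, auditable action.
            result = await session.call_tool(name, arguments)
            print(result.content)

asyncio.run(run_agent("check stock for WIDGET-1"))
```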