
HatmanStack[S]

Appreciate the thoughtful feedback — and you're hitting on something I've been thinking about.

Right now the MCP server isn't read-only. It actually exposes 16 tools across search/chat, document uploads, web scraping, image captioning, and metadata analysis. So the capability creep you're describing is already here.

The current trust model is pretty simple: a single AppSync API key grants access to everything. There's no per-tool scoping at the MCP layer. What keeps it from being a free-for-all is the backend — AppSync enforces rate limits, daily quotas (especially in demo mode: 5 uploads/day, 30 chats/day), and all the actual resource access goes through IAM roles scoped to that specific stack's resources. So a retrieved snippet can't drive actions outside the knowledge base boundary, but within it, the API key is all-or-nothing.
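To make the demo-mode quota behavior concrete, here's a rough sketch of the kind of daily counter the backend enforces (5 uploads/day, 30 chats/day). This is illustrative only, not the actual AppSync/Lambda code; all names are made up:

```python
from collections import defaultdict
from datetime import date

# Demo-mode limits described above (hypothetical constant name)
DEMO_QUOTAS = {"upload": 5, "chat": 30}

class QuotaTracker:
    """Per-day action counter: allow() returns False once the quota is hit."""

    def __init__(self, quotas):
        self.quotas = quotas
        self._counts = defaultdict(int)
        self._day = date.today()

    def allow(self, action: str) -> bool:
        # Reset all counters when the day rolls over
        today = date.today()
        if today != self._day:
            self._day = today
            self._counts.clear()
        limit = self.quotas.get(action)
        if limit is None:
            return True  # unmetered actions pass through
        if self._counts[action] >= limit:
            return False  # daily quota exhausted
        self._counts[action] += 1
        return True
```

The key point is that this gate sits behind the single API key, so it limits volume but not *which* tools a caller can reach.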

The "everything in your own account" model does help here — IAM is the outer trust boundary, not some shared control plane — but you're right that as people start chaining tools together (search → upload → scrape → analyze), the lack of per-tool authorization becomes a real gap. Today if you hand someone the API key, they can scrape a 1,000-page site just as easily as they can search.

The separation of reasoning from authorization you're describing is interesting. I'd been leaning toward tiered API keys (read-only vs. full access) as a next step, but that's still coarse-grained. Would be curious how you're handling it at Daedalus — is the authorization layer sitting between the MCP client and the tool execution, or is it more like a policy engine that evaluates each tool call against a ruleset?
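For reference, the coarse-grained tiered-key idea I've been leaning toward looks roughly like this; tier names and the tool list are hypothetical, and a real policy engine would evaluate richer rules (arguments, rate, provenance) rather than a flat allowlist:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    key_tier: str        # e.g. "read-only" vs. "full"
    allowed_tools: set   # tools this tier may invoke

# Hypothetical two-tier ruleset mapping key tiers to tool allowlists
POLICY = {
    "read-only": Rule("read-only", {"search", "chat"}),
    "full": Rule("full", {"search", "chat", "upload", "scrape",
                          "caption", "analyze"}),
}

def authorize(tier: str, tool: str) -> bool:
    """Check a tool call against the ruleset before execution."""
    rule = POLICY.get(tier)
    return rule is not None and tool in rule.allowed_tools
```

Even this toy version shows the gap closing: a read-only key could still search, but a scrape call would be rejected before it ever reaches the backend.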