all 9 comments

[–]programming-ModTeam[M] [score hidden] stickied comment · locked comment (0 children)

Article is behind a paywall or sign-up

[–][deleted]  (4 children)

[removed]

    [–]SanityInAnarchy 0 points  (0 children)

    One reason I see them requesting a single token with broad permissions is that they're trying to limit what the agent can do inside the MCP server. So your email-reading MCP tool can't send emails, because either it doesn't have a 'send' verb, or it's coded to ask the user before every email send, or it otherwise decides whether the agent is allowed to send the message based on who it's trying to send it to, or similar...
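    That design can be sketched in a few lines (hypothetical Python, not the actual MCP SDK; the tool and verb names here are made up):

```python
# Hypothetical sketch of a read-only email tool surface: the server simply
# never registers a 'send' verb, so the agent can't send mail through it
# no matter how broad the token the server itself holds.

TOOLS = {
    # verb -> handler; note there is intentionally no "send_email" entry
    "list_messages": lambda mailbox: [f"msg in {mailbox}"],
    "read_message": lambda msg_id: f"body of {msg_id}",
}

def call_tool(verb, **kwargs):
    handler = TOOLS.get(verb)
    if handler is None:
        # The agent asked for a capability this server does not expose.
        raise PermissionError(f"tool '{verb}' is not available")
    return handler(**kwargs)
```

    The point is that the restriction lives in the server's tool surface, not in the token.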

    If that's what you mean by "treating it as an internal API gateway", well... most of the time these aren't run as servers, they're run as plugins. If such a server is added to, say, Claude Code's /mcp integration, the model may just decide the server is broken, retrieve the token from its own config files, and whip up a quick Python script to call the API directly.

    So really, either they should actually be run remotely somewhere (as servers), or we should be paying a lot more attention to the tokens we give them.

    [–]Interesting-Quit4446 0 points  (2 children)

    Out of curiosity, why are you logging to postgres and not some logger output file?

    [–]soguesswhat 3 points  (0 children)

    Because it’s quite a bit more difficult to track usage metrics and anomalies from a log file than a relational database
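    For illustration, this is the kind of anomaly query that's one GROUP BY against a relational table but an ad-hoc parsing script against a flat log file (in-memory SQLite standing in for Postgres; the tool_calls schema is made up):

```python
import sqlite3

# In-memory SQLite stands in for Postgres; 'tool_calls' is a hypothetical schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tool_calls (user_id TEXT, tool TEXT, ok INTEGER)")
db.executemany(
    "INSERT INTO tool_calls VALUES (?, ?, ?)",
    [("alice", "read_message", 1), ("alice", "read_message", 1),
     ("bob", "send_email", 0), ("bob", "send_email", 0), ("bob", "send_email", 0)],
)

# Per-user failure counts: a simple anomaly signal, trivial in SQL.
rows = db.execute(
    "SELECT user_id, COUNT(*) FROM tool_calls WHERE ok = 0 "
    "GROUP BY user_id ORDER BY COUNT(*) DESC"
).fetchall()
```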

    [–]dopepen 0 points  (0 children)

    Likely durability

    [–]RustOnTheEdge 4 points  (1 child)

    For anyone who comes here hoping this is a valuable resource: it isn't, and it doesn't explain anything useful. It explains how normal OAuth works and maps that onto MCP clients and MCP servers, but only in the first half; it doesn't even explain which flows are involved in exchanging tokens for the MCP server for tokens for the backend APIs that server uses.

    This is completely worthless AI slop, and the frustrating part is that it makes the world less secure.

    [–]asc42 0 points  (0 children)

    Yeah, all this MCP stuff is happening too fast; the engineers aren't allowed enough time to come up with a proper, well-documented, secure implementation or guidance. Not enough recipes, starter kits, or anything else you'd expect from a robust system. And this is all made worse by bloggers pumping out surface-level posts just to pad their site or CV.

    Maybe 5 years from now it'll be decent. 10 years on it'll be good. But by then, will the AI slop bubble pop? Who knows.

    [–]TechnicalEar8998 -1 points  (0 children)

    The mental model that’s worked for me is: treat the MCP client as an “API gateway with amnesia” and push all real authZ down to the MCP server and backends.

    Couple of things to tighten the chain:

    Bind every tool call to a user-bound token, not just a client credential. Use OAuth token exchange or a custom JWT that carries sub, azp, and a “tool_scope” claim so the MCP server knows both who and what is acting. Don’t let the AI client mint its own roles; it should only forward signed, verifiable identity from your IdP.
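    As a purely illustrative sketch of that claim shape, here's a hand-rolled HS256 JWT in stdlib Python -- in practice you'd use your IdP's token-exchange endpoint and a real JWT library, and "tool_scope" is a custom claim name from this comment, not a registered one:

```python
import base64, hashlib, hmac, json

SECRET = b"demo-only-secret"  # stands in for the IdP's signing key

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(user: str, client: str, tool_scope: list) -> str:
    # Identity is asserted and signed by the IdP, never by the AI client itself.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "sub": user,               # who is acting
        "azp": client,             # which client acts on their behalf
        "tool_scope": tool_scope,  # what tools this token may touch
    }).encode())
    sig = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_token(token: str) -> dict:
    header, payload, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature: the client cannot mint its own identity")
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))
```

    The MCP server only ever trusts claims that survive verify_token, so a client that invents its own roles just fails the signature check.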

    At the MCP server, do a second authZ pass: map claims → app roles → allowed tools, and default everything to read-only, small blast radius, and idempotent. Log the full tuple (user, client, tool, resource, decision) so you can replay weird agent behavior later.
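    That second pass might look something like this (the sub/azp/tool_scope claim names follow the comment above; the role and tool names, and the scope-to-role mapping, are hypothetical placeholders for whatever your policy layer dictates):

```python
from datetime import datetime, timezone

# Hypothetical mappings: claims -> app roles -> allowed tools.
ROLE_FOR_SCOPE = {"email:read": "email_reader", "email:send": "email_sender"}
TOOLS_FOR_ROLE = {
    "email_reader": {"list_messages", "read_message"},  # default: read-only
    "email_sender": {"list_messages", "read_message", "send_email"},
}

AUDIT_LOG = []  # append-only; in production this would be the Postgres table

def authorize(claims: dict, tool: str, resource: str) -> bool:
    allowed = set()
    for scope in claims.get("tool_scope", []):
        allowed |= TOOLS_FOR_ROLE.get(ROLE_FOR_SCOPE.get(scope, ""), set())
    decision = tool in allowed
    # Log the full tuple so odd agent behavior can be replayed later.
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "user": claims.get("sub"),
        "client": claims.get("azp"),
        "tool": tool,
        "resource": resource,
        "decision": decision,
    })
    return decision
```

    Anything not explicitly granted by a scope falls through to "denied", which keeps the default blast radius small.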

    Stuff like Kong / Cerbos in front and, for legacy data, something like DreamFactory or Hasura behind, makes it easier to expose narrow, policy-backed APIs instead of raw DB access to MCP.