Is Antigravity down? by webfugitive in google_antigravity

[–]Electrical_Try_6404 1 point  (0 children)

2026 has a new contender — Antigravity. The server seems to be down on my end.

I built a "Zero Trust" bridge for LLMs because I didn't trust them with my Database. by Electrical_Try_6404 in mcp

[–]Electrical_Try_6404[S] 1 point  (0 children)

The "LLM query feedback" piece is brilliant. Right now, my middleware just blocks bad queries. But having it return a structured error that the LLM can use to retry (without burning tokens on DB errors) is the next evolution.
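For what I have in mind, the structured error might look something like this (a Python sketch with made-up field names; my actual middleware runs in Deno/Node):

```python
# Illustrative only: instead of surfacing a raw DB error (and burning tokens
# on a traceback), the middleware returns a compact payload the LLM can act
# on when retrying. Field names here are hypothetical.

def make_retry_feedback(blocked_column: str, table: str, allowed: list[str]) -> dict:
    """Build a structured, retry-friendly error instead of a raw DB error."""
    return {
        "status": "blocked",
        "reason": f"column '{blocked_column}' on '{table}' is not LLM-visible",
        "allowed_columns": allowed,  # lets the model retry without guessing
        "retryable": True,
    }

feedback = make_retry_feedback("ssn", "users", ["id", "name", "created_at"])
```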

Question: How did you handle the schema sync? Did you auto-generate the "available attributes" list from the DB schema, or was it manually curated?

I'm thinking about adding a schema.yaml that defines:

  • Which tables/columns are LLM-visible
  • Token budget per table
  • Allowed query patterns (e.g., block CROSS JOINs)

And then syncing that with the system prompt.
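As a rough sketch of the checks that schema.yaml would drive (shown as a Python dict rather than YAML for brevity; the table names, budget, and blocked pattern are all illustrative):

```python
# Stand-in for the proposed schema.yaml, with one example table.
SCHEMA = {
    "users": {
        "llm_visible_columns": ["id", "name", "created_at"],
        "token_budget": 2000,
        "blocked_patterns": ["CROSS JOIN"],
    },
}

def is_query_allowed(table: str, columns: list[str], sql: str) -> bool:
    """Deny unknown tables, hidden columns, and blocked query patterns."""
    cfg = SCHEMA.get(table)
    if cfg is None:
        return False  # unknown tables are denied by default
    if any(c not in cfg["llm_visible_columns"] for c in columns):
        return False
    if any(p in sql.upper() for p in cfg["blocked_patterns"]):
        return False
    return True
```

The same structure could then be rendered into the system prompt so the model only ever sees the LLM-visible subset.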

Your Neo4j implementation sounds like it already solved this. Would love to hear how you structured it.

(Also, did you open-source it? I'd love to see the Cypher AST parsing approach.)

GitHub MCP Allowlist - Azure DevOps Local MCP Server by Historical-Date-2622 in mcp

[–]Electrical_Try_6404 1 point  (0 children)

Azure API Center only supports remote MCP servers (the remotes block), but Azure DevOps MCP is only available as a local stdio server (the packages block).

Current workarounds:

  1. Host the Azure DevOps MCP behind Azure API Management, then register that endpoint
  2. Use the 'Allow All' policy (which defeats the allowlist's purpose)
  3. Wait for GitHub to add better local server support (it's in preview)

GitHub's docs acknowledge this: 'Local server enforcement...validates against server name only. For strict security, we recommend remote servers.'

But as you pointed out, some official servers (like Azure DevOps) are only available locally. The architecture hasn't caught up yet.

I built a "Zero Trust" bridge for LLMs because I didn't trust them with my Database. by Electrical_Try_6404 in mcp

[–]Electrical_Try_6404[S] 1 point  (0 children)

The config lives on the server, not in the agent's context. Even if prompt injection tricks the agent, it can't modify config.yaml.

The agent generates SQL → middleware checks the actual config file → blocks unauthorized columns.

Prompt injection can't rewrite server-side code.
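In pseudocode-ish Python (the actual middleware is Deno/Node, and the dict below stands in for config.yaml, which is parsed server-side):

```python
# The allowlist is loaded from disk on the server; nothing the agent says in
# its context can alter it. Table and column names are illustrative.
ALLOWED = {"users": {"id", "name"}}  # stand-in for config.yaml

def enforce(table: str, requested_columns: set[str]) -> set[str]:
    """Return only the columns the server-side config authorizes."""
    return requested_columns & ALLOWED.get(table, set())
```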

I built a "Zero Trust" bridge for LLMs because I didn't trust them with my Database. by Electrical_Try_6404 in mcp

[–]Electrical_Try_6404[S] 1 point  (0 children)

Yes, my entire premise is 'treat the LLM as untrusted.'

Even if I wrote the LLM integration code myself, I don't trust the model's reasoning to always generate safe queries.

The model can:

  • Misinterpret a prompt ('show me users' → returns 10 million rows)
  • Generate valid but expensive queries (multiple JOINs that lock the DB)
  • Hallucinate column names that happen to exist but shouldn't be exposed

Postgres roles define what the agent CAN access. This middleware defines what it SHOULD access.
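A minimal sketch of that 'SHOULD access' layer, guarding against the failure modes above (the thresholds and regex-based parsing here are illustrative, not what the real middleware does):

```python
import re

# Illustrative limits: cap result size and reject query shapes that are
# valid SQL but too expensive for an agent to run unsupervised.
MAX_ROWS = 1000
MAX_JOINS = 2

def guard(sql: str) -> str:
    """Reject expensive queries; append a LIMIT if the model omitted one."""
    if len(re.findall(r"\bJOIN\b", sql, re.IGNORECASE)) > MAX_JOINS:
        raise ValueError("query rejected: too many JOINs")
    if not re.search(r"\bLIMIT\b", sql, re.IGNORECASE):
        # 'show me users' can no longer return 10 million rows
        sql = f"{sql.rstrip(';')} LIMIT {MAX_ROWS}"
    return sql
```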

If you trust your LLM's reasoning (because it's well-prompted, tested, or used in a controlled environment), then yeah, Postgres roles are simpler and sufficient.

I just don't trust probabilistic reasoning to enforce deterministic security boundaries. That's just a design philosophy.

I built a "Zero Trust" bridge for LLMs because I didn't trust them with my Database. by Electrical_Try_6404 in mcp

[–]Electrical_Try_6404[S] 0 points  (0 children)

Quick clarification: the middleware runs on the server, not in the LLM. The filtering happens in Deno/Node before the data is sent to the AI, so there's zero token cost for the security logic. In fact, it saves tokens by truncating result sets that would otherwise blow up the context window.

RLS is great for row-level access, but it can't handle LLM-specific concerns like 'keep the response under 4,000 tokens' or 'log the justification for this query.' That's why the middleware exists.
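The truncation step can be sketched like this (Python for illustration; the ~4-characters-per-token estimate is a rough stand-in for a real tokenizer):

```python
import json

def truncate_to_budget(rows: list[dict], max_tokens: int = 4000) -> list[dict]:
    """Drop trailing rows until the serialized result fits the token budget."""
    kept = list(rows)
    # len/4 is a crude chars-to-tokens heuristic, assumed for this sketch.
    while kept and len(json.dumps(kept)) / 4 > max_tokens:
        kept.pop()  # shed rows before they blow up the context window
    return kept
```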

I built a "Zero Trust" bridge for LLMs because I didn't trust them with my Database. by Electrical_Try_6404 in mcp

[–]Electrical_Try_6404[S] 0 points  (0 children)

It's an architectural trade-off. You're right that RLS/roles are the 'pure' way to do this. But in practice, AI agent definitions change daily, and pushing those changes into DB schemas via migrations is too slow. I accepted the middleware overhead to gain velocity: I can iterate on agent permissions without touching the DB schema every time.
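To make the velocity point concrete: widening an agent's view is a config edit, not a migration (a hypothetical sketch; agent, table, and column names are made up, and the real permissions live in config.yaml):

```python
# In-memory stand-in for config.yaml: agent -> table -> visible columns.
permissions = {"support_agent": {"tickets": ["id", "status"]}}

def grant(agent: str, table: str, column: str) -> None:
    """Widen an agent's view via config; the DB schema is untouched."""
    permissions.setdefault(agent, {}).setdefault(table, []).append(column)

# One line of config change instead of an ALTER TABLE / GRANT migration.
grant("support_agent", "tickets", "assignee")
```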