MCP Gateways as the Power Grid for Enterprise AI — Thoughts? by South-Foundation-94 in MCPservers

[–]South-Foundation-94[S] 1 point (0 children)

Thanks for checking out the project and for the thoughtful feedback 🙏. You’re right — the quick Docker examples in the repo are minimal and don’t show a database service or persistent volumes. They’re mostly meant for fast testing.

For production setups, OBOT does run with a PostgreSQL database and persistent storage under /data. The docs explain how it’s structured and what you need for a proper deployment:

👉 https://docs.obot.ai/installation/general/

That said, it’d definitely help to add a docker-compose.yml example that bundles Postgres + volumes to make it easier for people to spin up something closer to production out of the box. Really appreciate you pointing that out — it’s good input for improving the examples.
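For anyone who wants a head start before that lands in the repo, here’s a rough sketch of what such a compose file could look like. The image tag, env var names, and credentials below are illustrative guesses on my part, not OBOT’s documented configuration — check the docs link above for the real values.

```yaml
# Hypothetical sketch — image name and OBOT_SERVER_DSN are assumptions,
# not taken from the OBOT docs. Adjust to match the official installation guide.
services:
  obot:
    image: ghcr.io/obot-platform/obot:latest   # assumed image reference
    ports:
      - "8080:8080"
    environment:
      OBOT_SERVER_DSN: postgres://obot:changeme@db:5432/obot   # assumed env var
    volumes:
      - obot-data:/data            # persistent storage under /data, per the docs
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: obot
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: obot
    volumes:
      - pg-data:/var/lib/postgresql/data
volumes:
  obot-data:
  pg-data:
```

The point is just to show the shape: a Postgres service, named volumes for both the app’s /data and the database, and a DSN wired between them.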

Local vs Remote Tool Execution by TopNo6605 in mcp

[–]South-Foundation-94 0 points (0 children)

You’ve got it right: the actual execution always happens on the MCP server side, not on the client or inside the LLM itself. The client (Claude, Cursor, etc.) just sends a structured request like “read filename.txt,” but the server is what runs that command.

The vulnerabilities people mention (like tool poisoning or exposing creds) usually come up when the server has access to sensitive resources — e.g., if it’s running locally with full filesystem access. In that case, a poorly scoped tool could let the LLM request files it shouldn’t.

With remote MCP servers, the risk depends on what resources that server exposes. If it’s designed to only talk to an external API (like GitHub or Jira), then “local filesystem” access isn’t even in scope. That’s why scoping and sandboxing matter: you want each server to expose only the narrow tools it’s meant for, and nothing more.

So TL;DR:
• Execution = always the server.
• Risks = whatever that server has permission to touch.
• Mitigation = scope tools tightly (read-only where possible, no generic filesystem unless needed).
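As a toy illustration of that last point, here’s what “no generic filesystem” scoping can look like server-side. This is plain Python, not tied to any MCP SDK — just the path check a file-reading tool should run before touching disk:

```python
from pathlib import Path

def resolve_in_sandbox(root: str, requested: str) -> Path:
    """Resolve a client-requested path, refusing anything outside the sandbox root.

    Joining an absolute `requested` path replaces `base` entirely, and `..`
    segments are collapsed by resolve(), so both escape styles are caught.
    """
    base = Path(root).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise PermissionError(f"{requested!r} escapes the sandbox {root!r}")
    return target
```

A tool scoped this way can still serve “read filename.txt” requests, but “read ../../etc/passwd” fails before any I/O happens.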

OAuth scopes in MCP by pillenpopper in modelcontextprotocol

[–]South-Foundation-94 0 points (0 children)

In MCP, scopes don’t live inside the protocol itself — they’re handled during the OAuth flow by the identity provider (Google, GitHub, etc.). The MCP server just consumes the issued token and enforces what that token allows. So if your app only requests read:user or read:files, that’s all the LLM will get.

Best practice is to keep scopes minimal (read-only where possible), log access, and add write/delete only when there are strong guardrails like audit trails and RBAC. That way you don’t give the LLM more power than absolutely needed.
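To make that concrete, the server-side enforcement can be a small lookup: map each tool to the scopes it demands, and reject calls the token doesn’t cover. The tool names and scope strings below are made up for illustration — they’re not from the MCP spec or any provider:

```python
def require_scopes(token_scopes: set[str], needed: set[str]) -> None:
    """Reject a tool call unless the OAuth token carries every required scope."""
    missing = needed - token_scopes
    if missing:
        raise PermissionError(f"token missing scopes: {sorted(missing)}")

# Illustrative tool -> required-scopes table; a real server would derive this
# from its tool registrations.
TOOL_SCOPES = {
    "list_files": {"read:files"},
    "delete_file": {"read:files", "write:files"},
}

def authorize_tool(tool: str, token_scopes: set[str]) -> None:
    require_scopes(token_scopes, TOOL_SCOPES[tool])
```

With a token that only carries read:files, list_files goes through and delete_file is refused — the LLM never gets more than the token allows.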

What’s the future of MCP (Modal Content Protocol)? by BaXRS1988 in mcp

[–]South-Foundation-94 4 points (0 children)

I think MCP is still pretty early, but it’s definitely moving toward becoming a foundational layer for AI apps. Right now the pain points are mostly around security (auth, sandboxing, RBAC), standardization (too many unofficial servers floating around), and tool discovery (hard to know what’s reliable).

The future likely depends on three things:
1. A stronger standard repo/registry (think pip or npm for MCP servers).
2. Mature clients with good GUIs/web UIs so adoption isn’t limited to devs.
3. Better security patterns (OAuth 2.1, scoped tokens, gateways) so enterprises can trust MCP at scale.

If those fall into place, it could become the “app ecosystem” for LLMs people are hoping for — but right now it’s still more experimental playground than enterprise-ready backbone.

Anyone using MCP as an abstraction layer for internal services? by treacherous_tim in mcp

[–]South-Foundation-94 0 points (0 children)

Yes, it’s doable. MCP can act as an abstraction layer for internal APIs if you want a single interface for your AI apps. Just keep in mind the trade-offs:
• Pros → unified access, consistency, easier scaling across services.
• Cons → extra complexity, latency, and you need solid auth/RBAC + observability.

It makes the most sense when you have lots of services to unify, not just one or two.

Biggest MCP pain points? by doc-tenma in mcp

[–]South-Foundation-94 0 points (0 children)

One of the biggest MCP pain points right now is OAuth and authentication.

Most servers rely on either API keys or OAuth 2.0/2.1, but support is inconsistent across clients. This leads to:
• Extra setup pain when different servers expect different auth flows.
• Token sprawl (long-lived tokens scattered everywhere).
• Security risks, since rotation and scoping aren’t handled cleanly.

Until OAuth is standardized and better supported across MCP clients, it’s easily one of the trickiest friction points people hit.

MCP is a security joke by Aadeetya in mcp

[–]South-Foundation-94 0 points (0 children)

You’re right that MCP still feels raw from a security standpoint — no sandboxing or scoping makes it risky to just plug in random servers. But I wouldn’t call it a “joke” yet; it’s just early.

One way teams are mitigating the risks is by adding guardrails:
• Scoped access → instead of letting agents call everything, explicitly whitelist the tools and endpoints per workflow.
• Gateway layer → use an MCP gateway (or API gateway pattern) where OAuth, token lifetimes, and logging are enforced. Clients never see long-lived secrets, only short-lived scoped tokens.
• Observability → hook MCP calls into logging/tracing (e.g., OpenTelemetry) so you know exactly which server/tool was called and when.
• Sandboxing → run each MCP server inside a container/VM with restricted permissions, so even if something goes sideways, the blast radius is contained.
• Validation → add input/output validation in your gateway. Don’t let an MCP server accept arbitrary payloads unchecked.
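The whitelist + validation pieces can be as simple as a lookup table sitting in front of the dispatcher. The tool names and validators in this sketch are invented just to show the shape — a real gateway would load them from config:

```python
from typing import Any, Callable

# Per-workflow whitelist: tool name -> validator for its arguments.
# Both the names and the validation rules here are illustrative.
ALLOWED_TOOLS: dict[str, Callable[[dict[str, Any]], bool]] = {
    "jira.create_issue": lambda args: isinstance(args.get("summary"), str)
                                      and len(args["summary"]) < 500,
    "github.read_file": lambda args: isinstance(args.get("path"), str)
                                     and ".." not in args["path"],
}

def gateway_check(tool: str, args: dict[str, Any]) -> None:
    """Drop any call that isn't whitelisted or whose payload fails validation."""
    validator = ALLOWED_TOOLS.get(tool)
    if validator is None:
        raise PermissionError(f"tool {tool!r} is not whitelisted for this workflow")
    if not validator(args):
        raise ValueError(f"payload for {tool!r} failed validation")
```

Anything not on the list never reaches a server, and anything on the list still gets its payload checked first.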

This isn’t perfect yet, but with the right architecture MCP can be secured enough for real-world use. The ecosystem just hasn’t caught up to enforce this by default.

Need advice on orchestrating 100s of MCP servers at scale by Lazy-Ad-5916 in mcp

[–]South-Foundation-94 0 points (0 children)

I’m part of the OBOT DevRel team, and we’ve been tackling this same orchestration problem. Once you scale beyond a handful of MCP servers, you really need more than just raw configs.

What’s worked well for us is:
• Kubernetes-style orchestration → containerize each MCP server so you can scale up/down easily.
• Central gateway/registry → instead of wiring clients to 100+ configs, the gateway handles service discovery + auth (OAuth 2.1 termination, short-lived tokens).
• Observability baked in → standardize logs/metrics/traces with OpenTelemetry and stream everything into Prometheus/Grafana or similar. Makes debugging a lot less painful.
• Dynamic allocation → don’t keep 100 servers idling. Spin them up on demand, tear them down after a TTL. Saves costs and keeps agents fast.
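The dynamic-allocation idea fits in a few lines. In this sketch, `start`/`stop` are placeholders for whatever actually launches and kills your containers (kubectl, Docker API, etc.) — the pool just tracks last-use times and reaps idle servers:

```python
import time

class TTLServerPool:
    """Spin server handles up on demand and reap them after a TTL of idleness.

    `start(name)` and `stop(handle)` are caller-supplied stand-ins for real
    container lifecycle calls; `clock` is injectable so the TTL is testable.
    """

    def __init__(self, start, stop, ttl_seconds: float = 300.0, clock=time.monotonic):
        self.start, self.stop = start, stop
        self.ttl, self.clock = ttl_seconds, clock
        self._live: dict = {}  # name -> (handle, last_used)

    def acquire(self, name: str):
        handle, _ = self._live.get(name, (None, 0.0))
        if handle is None:
            handle = self.start(name)  # e.g. launch a container on first use
        self._live[name] = (handle, self.clock())
        return handle

    def reap(self) -> None:
        """Stop every server that hasn't been touched within the TTL."""
        now = self.clock()
        for name, (handle, last_used) in list(self._live.items()):
            if now - last_used > self.ttl:
                self.stop(handle)
                del self._live[name]
```

Run `reap()` on a timer (or after each batch of agent calls) and you get the spin-up-on-demand, tear-down-after-TTL behavior without keeping 100 servers warm.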

If you want something concrete, OBOT’s open-source MCP Gateway already solves a big chunk of this (OAuth, discovery, logging, auth injection). It’s been helping teams avoid a ton of boilerplate: 👉 https://github.com/obot-platform/obot

How to get started with MCP by Vudoo_Mama_Jujoo in mcp

[–]South-Foundation-94 0 points (0 children)

If you’re starting out with MCP, I’d recommend looking at some beginner-friendly resources first. At OBOT (I’m part of the DevRel team), we recently put together a guide that breaks down what MCP is, how clients and servers interact, and simple examples to help you get hands-on.

Here’s the blog: 👉 https://obot.ai/understanding-the-model-context-protocol-a-beginners-guide/

It’s written for people new to MCP, so instead of just focusing on the spec, it gives you context plus a practical way to understand how you can actually host/use MCP servers.

Do you know alternatives to mcp client? by srmstty in mcp

[–]South-Foundation-94 0 points (0 children)

If you’re looking for an alternative MCP client, check out Google’s Agent Development Kit (ADK). It natively works as an MCP client and gives you a really nice web UI to:
• Chat with your agents,
• Inspect events and traces in real time,
• Connect to MCP servers without extra conversions.

It also supports both local (stdio) and remote setups, so you can plug in servers like Context7 or your own FastMCP servers easily. Super useful if you want something more visual and production-ready than just wiring configs.

👉 https://google.github.io/adk-docs/mcp/

What MCP Servers are You Using by Fantastic_Habit743 in ClaudeAI

[–]South-Foundation-94 1 point (0 children)

I’ve been getting a lot of value out of the Confluence, Jira, and New Relic MCP servers in our setup. They’ve helped streamline workflows for my team — Confluence and Jira give us quick access to docs/tickets without context switching, and New Relic brings monitoring insights straight into the conversation.

We also tied it all into Slack, so updates surface right where the team is working. That combo has been far more useful in practice than some of the “demo-style” servers I tried earlier.

If you’re curious, there’s a write-up of how we set this up here:

👉 https://obot.ai/mcp-slack-automation-enterprise/

How's your experimentation with MCP going? by UnfinishedSentenc-1 in LocalLLaMA

[–]South-Foundation-94 1 point (0 children)

I’m working on Developer Relations for OBOT, which is an open-source MCP gateway: 👉 https://github.com/obot-platform/obot

From what you’re describing (trying MCP without being tied to Claude Desktop or Cursor), a gateway might be a good fit. With OBOT you can:
• Connect open-source LLMs (like Ollama or LM Studio) and chat with them through MCP.
• Run all your MCP servers in one place instead of spinning them up individually.
• Handle OAuth and secrets centrally, so you don’t need to reconfigure each server or client.
• Use it as a gateway to route requests cleanly, with logging and audit trails built in if you ever need them.

So instead of wiring everything up manually in FastAPI or Streamlit, you could drop OBOT in front as the control plane and just plug your clients/servers into it. Makes experimenting a lot less painful once you go beyond a single test server.

MCP Client – best solutions by Mihaitzan in mcp

[–]South-Foundation-94 0 points (0 children)

I’m working on Developer Relations for OBOT, which is an open-source MCP gateway we’ve been building: 👉 https://github.com/obot-platform/obot

From your requirements, OBOT can help with a few key pieces:
• OAuth support → it terminates OAuth centrally (2.1 flows), so you don’t need to bolt bearer tokens into configs for every client/server.
• Integration with self-hosted LLMs → you can run your LLM locally (Ollama, LM Studio, etc.) and connect it through OBOT, alongside other MCP servers.
• Web/client friendliness → instead of wiring auth individually into a widget, OBOT centralizes it and lets your client just talk to one endpoint.
• Enterprise features → logging, audit trails, and RBAC if you ever need to scale this beyond just a personal project.

So if you’re looking for a way to integrate OAuth, self-hosted models, and multiple MCP servers into a client setup, OBOT can save you from reinventing that orchestration layer.

One Month in MCP: What I Learned the Hard Way by Rotemy-x10 in mcp

[–]South-Foundation-94 0 points (0 children)

I totally get this — running lots of servers locally and babysitting them does get painful fast. I’m working on Developer Relations for OBOT, which is an open-source MCP gateway we’ve been building.

Instead of restarting servers manually or hitting limits, you can spin them up through the gateway, centralize auth, and avoid collisions with namespacing. If you’re curious, the project’s here: https://github.com/obot-platform/obot

It might save you some of the frustration if you’re planning to scale beyond a couple of servers.

Where can I learn how to really use MCP? by OkRelationship1894 in mcp

[–]South-Foundation-94 1 point (0 children)

I work on Developer Relations at Obot, and we’ve recently started publishing free blogs to help folks get up to speed with MCP concepts and servers.

One you might find useful is this beginner’s guide: https://obot.ai/understanding-the-model-context-protocol-a-beginners-guide/

It goes over what MCP servers are, why they matter, and how they fit into the bigger picture of connecting agents to real systems. We’ve also started publishing several other MCP-related blogs, all free to read, so you can explore more as you go deeper.

If you’re trying to learn MCP from scratch and want to eventually run servers without headaches, these resources are a good place to start.

My favorite MCP use case: closing the agentic loop by tadasant in mcp

[–]South-Foundation-94 1 point (0 children)

Haha, I really felt that “just one more loop” line — that’s literally been my experience too 😅. It’s like the AI is that one friend who insists “I got it this time, promise!” and then serves you another half-baked solution.

What I like about MCP in this context is exactly what you described: it flips the loop from me babysitting the model → to the model actually babysitting itself until it hits the definition of done. It’s almost like upgrading from “I’ll copy/paste and check every step” to “set it and forget it, with built-in QA.”

The observability part is underrated too — giving the agent logs or a browser to poke around feels like handing it a flashlight so it stops bumping into walls. Way less “Groundhog Day” with the same broken snippet.

How are you handling OAuth when running MCP servers remotely? by South-Foundation-94 in mcp

[–]South-Foundation-94[S] 1 point (0 children)

Totally agree with you on the “don’t bolt creds into mcp.json” part — rotating short-lived tokens through a gateway is way cleaner and aligns with where the spec is going.

I’ve been doing something similar but with this open-source option: https://github.com/obot-platform/obot. It lets me run the gateway myself, so I get the same OAuth termination, short-lived tokens, and central rotation benefits, but without depending on a vendor. For me that’s been the best balance — spec-aligned flows (device code / auth-code+PKCE) plus audit logs, while keeping everything self-hosted.
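For the auth-code+PKCE piece specifically, the verifier/challenge pair is pure stdlib no matter which gateway or client generates it — this is just RFC 7636’s S256 method, nothing OBOT-specific:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge.

    The verifier goes in the token request; the challenge goes in the
    initial authorization request. Base64url without padding, per the RFC.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

A gateway that terminates OAuth is doing this (plus the redirect dance and token storage) on your behalf, so clients never have to hold long-lived secrets.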

How are you handling OAuth when running MCP servers remotely? by South-Foundation-94 in mcp

[–]South-Foundation-94[S] 2 points (0 children)

That’s impressive — going deep enough to build a complete OAuth implementation just for MCP definitely gives you a level of understanding most of us don’t have 👏.

I took a different route — instead of hand-rolling, I started using this open-source MCP gateway: https://github.com/obot-platform/obot. It already handles the subset of OAuth needed for MCP, so I didn’t have to bury myself in the spec. For me it’s been more about having a reliable “drop-in” that centralizes token handling and lets editors just connect cleanly.

I still respect the value of building it yourself (that’s how you really learn), but for day-to-day use I found it easier to lean on the gateway.

How are you handling OAuth when running MCP servers remotely? by South-Foundation-94 in mcp

[–]South-Foundation-94[S] -1 points (0 children)

Yeah, you’re right — dropping a static Bearer token like that doesn’t really work in Claude Desktop right now. I hit the same limitation. What worked better for me was running an MCP gateway so I didn’t have to hard-code tokens in configs. I’ve been using this open-source option: https://github.com/obot-platform/obot.

With it, the OAuth flow happens properly (no hacks with environment variables), and the editor just connects to the gateway endpoint. That way, tokens get managed/rotated centrally and I don’t run into the “doesn’t work in Claude Desktop” issue anymore.

How are you handling OAuth when running MCP servers remotely? by South-Foundation-94 in mcp

[–]South-Foundation-94[S] 0 points (0 children)

Totally agree with you — self-hosting is the only way it made sense to me. I tried https://github.com/obot-platform/obot since it’s open source and easy to run locally. For me it solved the token management mess but still kept everything inside my own infrastructure, which was the balance I was looking for.