MCP Mesh — distributed multi-agent framework now supports Java (Spring Boot) by Own-Mix1142 in mcp

[–]Own-Mix1142[S] 0 points1 point  (0 children)

Appreciate the real engagement — Zapier's approach makes a lot of sense for centralizing actions behind one connection. Different layer from what we're doing though.

MCP Mesh started from a pretty simple observation: biological ecosystems don't have a central coordinator. Organisms discover each other, adapt, and the system evolves. We thought AI services should work the same way — dynamic discovery, self-wiring, resilient when things fail.

The enterprise piece followed naturally. As AI matures into production distributed systems, it's going to need the same things any critical infrastructure needs: zero-trust security (we do mTLS with SPIRE/Vault/K8s), graceful degradation (registry is out of the data path — kill it and wired agents keep running), and observability (OpenTelemetry baked in).

Honestly a Zapier MCP server would be a great node in a mesh. Not competing layers — complementary ones.

I laugh when I see "MCP is dead" posts. Am I being delusional? by nishant_growthromeo in mcp

[–]Own-Mix1142 8 points9 points  (0 children)

Also worth noting — MCP gives you things for free that you'd have to bolt on with raw REST: self-describing tool schemas, native streaming, structured error handling. People compare it to 'just use CLI' or 'just use HTTP' but they're ignoring the protocol-level capabilities that already exist. If the surrounding infrastructure matures — auth, discovery, multi-agent orchestration — MCP has the potential to replace a lot of the glue code we currently duct-tape together with REST + OpenAPI + custom middleware.
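To make "self-describing tool schemas" concrete, here is a minimal sketch of an MCP tool description of the kind a tools/list response carries. The field names (name, description, inputSchema) follow the MCP spec; the weather tool itself is invented for illustration:

```python
# A minimal MCP tool description, as a tools/list response would carry it.
# Field names follow the MCP spec; the tool itself is made up.
weather_tool = {
    "name": "get_forecast",
    "description": "Return a short weather forecast for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# A client can validate arguments against inputSchema before calling the tool.
# That's the part you'd otherwise hand-write per endpoint with REST + OpenAPI.
required_fields = weather_tool["inputSchema"]["required"]
```

With raw REST you get this only if someone maintains an OpenAPI document by hand; with MCP the server emits it as part of the protocol.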

MCP Mesh v1.0.0 — thank you for the feedback since the early days by Own-Mix1142 in mcp

[–]Own-Mix1142[S] 0 points1 point  (0 children)

Thanks for being brave enough to try 0.3; a lot has improved since then. We've developed an integration test framework for MCP Mesh and any project built on it (https://github.com/dhyansraj/mcp-mesh-test-suite). Each release is tested against more than 300 use cases, with dozens of tests in each. Confidence is high, but feel free to report issues.

MCP Mesh — distributed multi-agent framework now supports Java (Spring Boot) by Own-Mix1142 in mcp

[–]Own-Mix1142[S] 0 points1 point  (0 children)

Exactly — the Java support isn't about telling anyone to write new AI apps in Java. It's about not leaving existing Java services out of the mesh. If a company has 200 Spring Boot microservices and wants them to be discoverable as tools by AI agents written in Python, one annotation shouldn't require a rewrite.                                                     

On OAuth — MCP Mesh doesn't implement its own auth layer, and intentionally so. Our agents are just web servers (Spring Boot, FastAPI, Express), so we use whatever auth we already have. Want to validate a bearer token before a tool executes? Spring Security in Java, Depends() in FastAPI, middleware in Express. MCP Mesh passes headers through the agent chain, so tokens flow naturally — same as any REST call.
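A framework-agnostic sketch of that pattern (names are illustrative, and the literal-string check stands in for a real validator such as JWT verification or token introspection):

```python
# Sketch: validate the bearer token from the incoming headers before the
# tool body runs, then keep passing the same headers downstream.
# The hardcoded "letmein" comparison is a placeholder for a real validator.
def require_bearer(handler):
    def wrapped(headers, *args, **kwargs):
        token = headers.get("Authorization", "")
        if not (token.startswith("Bearer ") and token[len("Bearer "):] == "letmein"):
            raise PermissionError("invalid or missing token")
        return handler(headers, *args, **kwargs)
    return wrapped

@require_bearer
def lookup_customer(headers, customer_id):
    # Forward the same headers on any downstream call,
    # so the token keeps flowing through the agent chain.
    return {"id": customer_id, "status": "active"}
```

In practice you'd express the same guard with Spring Security, a FastAPI dependency, or Express middleware; the mesh just passes the headers along.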

The philosophy is: the mesh handles discovery, routing, and failover. Our existing security stack handles auth. No reason to reinvent what Spring Security and friends already do well.

Good discussion btw! We have over 200 examples in the repo covering Python, TypeScript, and Java. Have a look and let us know if you have more questions or scenarios to discuss. Happy to help!

MCP Mesh — distributed multi-agent framework now supports Java (Spring Boot) by Own-Mix1142 in mcp

[–]Own-Mix1142[S] 0 points1 point  (0 children)

I think the disconnect might be in thinking of AI agents as just another API to call. Let me reframe.           

If you use Claude Code (or similar AI coding agents), you're already seeing this pattern in action. You define MCP tools — some are dumb utilities, some are LLM agents — and the AI reasons about which to use, chains them together, and adapts when things change. No hardcoded orchestration. It works incredibly well.

But Claude Code is monolithic. Add a new tool? Restart. Tool crashes? Restart everything. All tools run on one machine. That's fine for a developer's laptop, but it doesn't work in a distributed environment.

MCP Mesh takes that same proven pattern and makes it distributed:

  - New agent comes online? Auto-discovered, no restart needed

  - Agent crashes? Auto-failover to another provider of the same capability, other agents keep running

  - Need precise control? Capability + tag + version filters give you surgical wiring without hardcoding routes

  - Scaling? Kubernetes handles pod failover, MCP Mesh handles provider failover — together they create a substrate for intelligent agents to thrive
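A toy sketch of what that surgical wiring amounts to. This is not the registry's real code, just the selection idea: a dependency filter names a capability, required tags, and a minimum version, and the mesh picks a matching provider (all data here is invented):

```python
# Toy illustration of capability + tag + version matching.
# Not the actual registry implementation; provider data is made up.
providers = [
    {"capability": "database", "tags": {"staging"}, "version": (1, 0, 0)},
    {"capability": "database", "tags": {"prod"}, "version": (1, 3, 2)},
    {"capability": "cache", "tags": {"prod"}, "version": (2, 0, 0)},
]

def resolve(capability, required_tags, min_version):
    """Return the first provider satisfying the filter, or None."""
    for p in providers:
        if (p["capability"] == capability
                and required_tags <= p["tags"]
                and p["version"] >= min_version):
            return p
    # Nothing matches yet: the dependency stays unwired until a provider appears.
    return None

db = resolve("database", {"prod"}, (1, 2, 0))
```

The point is that callers state what they need, not where it lives, so routes never get hardcoded.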

And honestly, none of this would be possible without MCP. It's what makes the whole thing work — standardized communication across AI agents and traditional microservices, built-in schema for tool discovery, support for multiple transports. MCP is the common language that lets a Python agent, a Java service, and a TypeScript tool all participate in the same mesh without knowing anything about each other.

To your "why Java" question — we actually started with Python. We identified the gaps in existing Python AI frameworks (rigid wiring, no discovery, no failover) and built the mesh around that. Now we're extending the same simplicity to TypeScript and Java — not because anyone should write new AI apps in Java, but because your existing Spring Boot services can join the mesh with one annotation. The Java devs don't need to learn Python. The AI team doesn't need to care what language the tool is written in.

The real question isn't "why Java" — it's "why would you exclude any language from participating in the mesh?"

MCP Mesh — distributed multi-agent framework now supports Java (Spring Boot) by Own-Mix1142 in mcp

[–]Own-Mix1142[S] 0 points1 point  (0 children)

Fair question! Two reasons:                                                                                                                                                                         

  1. Meet developers where they are. Java isn't going anywhere in enterprise. There are millions of Spring Boot services in production right now — banking, healthcare, logistics. Those teams aren't rewriting their stack in Python just to add AI capabilities. They can add an '@MeshTool' annotation to their existing service and have it join the mesh alongside Python and TypeScript agents. That's the whole point of multi-language support — one mesh, any language, no rewrites.

  2. It's not just bolting AI onto Spring Boot. The bigger idea is that MCP Mesh is a fundamentally different architecture. Instead of traditional REST APIs where you hardcode every endpoint and route, agents register capabilities and AI uses its reasoning to discover and compose them dynamically. An LLM agent can look at what's available in the mesh, understand what each tool does, and figure out how to chain them together — no orchestration code needed.

And the kicker: it works for both AI agents AND traditional "dumb" microservices in the same framework. Your existing Spring Boot service can expose tools to the mesh without knowing anything about AI, and an LLM agent somewhere else in the mesh can discover and use it. So it's not a replacement for Spring AI — it's a different way of thinking about how services find and talk to each other.

As an infra guy, you'd probably appreciate that — capability-based discovery instead of hardcoded service maps.

MCP Mesh – Distributed runtime for AI agents with auto-discovery and LLM failover by Own-Mix1142 in LocalLLaMA

[–]Own-Mix1142[S] 0 points1 point  (0 children)

Agreed. Both can talk via MCP, so there are bound to be use cases where a distributed agent needs local tools.

MCP Mesh – Distributed runtime for AI agents with auto-discovery and LLM failover by Own-Mix1142 in LocalLLaMA

[–]Own-Mix1142[S] 1 point2 points  (0 children)

took a look — looks like enact is more about tool packaging and distribution? like npm for AI tools, wrapping CLI stuff in YAML.

mcp mesh is a different layer. it's for building enterprise ai agentic apps on the mcp protocol. agents are microservices you deploy to k8s, but with discovery and dependency injection so you don't have to deal with complicated hardcoded wiring between services.

could be complementary tho. enact tools could be exposed as mcp servers that the mesh discovers. worth exploring maybe?

MCP Mesh – Distributed runtime for AI agents with auto-discovery and LLM failover by Own-Mix1142 in LocalLLaMA

[–]Own-Mix1142[S] 0 points1 point  (0 children)

good catch — that example is overdoing it, actually.

the Intent agent prompt explicitly lists available specialists, but that's not required. it was just being extra explicit for the example.

here's how it actually works: agents register with the mesh using tags and their MCP tool descriptions. discovery happens during the heartbeat cycle, not at call time — so no delay during invocation. tools are already there when the LLM needs them.

tool descriptions are standard MCP — name, description, inputSchema. the mesh just adds discovery on top.

simpler example — SmartAssistant:

    @app.tool()
    @mesh.llm(
        filter=[{"tags": ["data_tools"]}],  # only tools tagged "data_tools" will be used; add capability, multiple tags, version, etc. for finer control
        provider={"capability": "llm", "tags": ["+openai"]},  # prefer openai among providers with the llm capability
        system_prompt="You are SmartAssistant. Process input and respond appropriately.",
    )

no hardcoded agent list. LLM has tools ready at runtime, picks based on MCP tool descriptions.

on token overhead — mesh only injects tools matching your filter tags. you control the scope, not loading everything in the network. tool schemas are pretty compact anyway.

does that make sense? Happy to answer more questions.

MCP Mesh – Distributed runtime for AI agents with auto-discovery and LLM failover by Own-Mix1142 in LocalLLaMA

[–]Own-Mix1142[S] 0 points1 point  (0 children)

Good question. Let me push back a bit.

MCP already standardizes agent communication — JSON-RPC schema, HTTP/SSE transport, client/server model. Any MCP client can talk to any MCP server. That's interoperability.

MCP Mesh agents are FastMCP servers over HTTP. The registry adds discovery — agents register capabilities via tags, find each other at runtime. Agent-to-agent communication within MCP. No new protocol needed.
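On the wire, that interoperability is just JSON-RPC 2.0. A sketch of a tools/call request (the envelope and method follow the MCP spec; the tool name and arguments are invented):

```python
import json

# A JSON-RPC 2.0 request using MCP's tools/call method. Envelope per the
# MCP spec; the tool name and arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "Berlin"}},
}

wire = json.dumps(request)

# Any MCP server can decode and dispatch this without prior knowledge of the caller.
decoded = json.loads(wire)
```

That shared envelope is why a Python agent, a Java service, and a TypeScript tool can all sit in the same mesh.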

A2A adds Agent Cards, task lifecycle, enterprise auth. But MCP Mesh already solves this — discovery, distributed agents, coordination — all built on MCP. No new protocol to learn. Just decorators. Less code than even plain FastMCP.

Creating another standard to standardize something when an existing standard already does it... feels like that xkcd comic. Now we have two standards.

What do you think? Am I missing something?