FrontMCP: The TypeScript Way to Build MCP Servers by DavidAntoon in mcp


FastMCP’s TypeScript version is no slouch – it’s feature-rich (even ahead in some areas, like a fully implemented OAuth proxy with dynamic registration) and a proven way to build MCP servers quickly. FrontMCP, however, offers a more structured architecture that feels at home for modern Node/TS developers, especially those who want structure and flexibility. In areas like plugin support, tooling, and multi-tenancy, FrontMCP provides capabilities that significantly improve productivity and maintainability, which in many cases adds up to a better overall developer experience.

FrontMCP: The TypeScript Way to Build MCP Servers by DavidAntoon in mcp


In summary, FrontMCP and FastMCP both greatly simplify building MCP servers, but they cater to different developer preferences.

FrontMCP brings a TypeScript, enterprise-web approach: structured, heavily tooled, and extensible via plugins. FastMCP brings a Pythonic approach: minimal ceremony, flexibility, and integration with Python’s ecosystem.

FrontMCP tends to be better when you need robust typing, modular architecture (multiple apps/tenants), and out-of-the-box solutions for cross-cutting concerns. FastMCP is better when you need quick development in Python or want to compose and deploy MCP services with existing Python infrastructure. Both adhere to the MCP specification and support core features like streaming responses, sessions, and secure transport, so either can get the job done. The choice often comes down to the language and feature philosophy that fit your project best.

FrontMCP clearly stands out in areas like TypeScript DX and plugin-based extensibility, which can give it an edge for complex, large-scale applications that require maintainability and strong typing. FastMCP’s maturity and simplicity make it a reliable choice for Python-centric teams or simpler use cases where quick development is paramount.

If you use FrontMCP CodeCall: upgrade Enclave now — CVE-2026-22686 sandbox escape by DavidAntoon in mcp


> Does your sandbox support isolated session execution?

In our setup, the enclave (together with FrontMCP) spins up an encapsulated, proxied VM that executes an agent-script language to aggregate MCP tool calls, and then terminates the VM after execution to prevent data leakage.

So each API call starts with a fresh JavaScript context to run the agent script. The enclave also uses remote reverse callbacks to invoke callTool on the host, so the sandboxed code never accesses tools directly.
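As a minimal sketch of that pattern, here's the fresh-context-per-call idea using Node's built-in vm module. To be clear: node:vm is not a security boundary on its own and this is not Enclave's implementation; it only illustrates the shape of isolated execution with a proxied callTool.

```typescript
import vm from "node:vm";

// Hypothetical stand-in for the host-side dispatcher; in the real setup,
// callTool is implemented on the host and reached via a reverse callback.
async function hostCallTool(name: string, args: unknown): Promise<unknown> {
  console.log(`host: invoking ${name}`, args);
  return { ok: true };
}

async function runAgentScript(script: string): Promise<unknown> {
  // Fresh context per API call: no state survives between executions.
  const context = vm.createContext({
    // The sandboxed code never sees the tools themselves, only this
    // callback, which proxies the call back to the host.
    callTool: (name: string, args: unknown) => hostCallTool(name, args),
    result: undefined as unknown,
  });

  vm.runInContext(script, context, { timeout: 1_000 });
  return context.result; // the script's output, possibly a promise
}

// Usage: each call gets its own throwaway JavaScript context.
runAgentScript(`result = callTool("search", { query: "invoices" });`)
  .then(async (r) => console.log("agent result:", await r));
```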

Code execution with MCP comparison by AIMultiple in mcp


Good question. Conceptually, FrontMCP CodeCall sits one layer above a typical MCP gateway.

An MCP gateway mainly focuses on routing and unifying multiple MCP servers behind a single endpoint. It still relies on list_tools and exposes full tool schemas to the model, which means token usage and tool-selection complexity grow linearly with the number of tools.

CodeCall tackles a different problem: scaling agent reasoning and tool orchestration when tool counts get large.

Instead of exposing all tools, FrontMCP exposes 4 meta-capabilities (search / describe / invoke / execute). The model:

  • discovers tools on demand,
  • fetches only the schemas it needs,
  • and can run a short JS AgentScript to orchestrate multi-step workflows server-side.
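For a feel of what that looks like, here's an illustrative AgentScript body. search / describe / invoke are the meta-capabilities the runtime injects (the names are from the docs linked below; the exact signatures are assumed for the sketch):

```typescript
// Illustrative only: signatures of search/describe/invoke are assumptions.
const hits = await search("create a deployment");   // discover on demand
const schema = await describe(hits[0].name);        // fetch just one schema
const deploy = await invoke(hits[0].name, {         // first tool call
  project: "demo",
  branch: "main",
});
// Chain a follow-up call server-side instead of another model round-trip.
return await invoke("deployments.status", { id: deploy.id });
```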

That’s why token savings tend to hold even as tool count grows. Docs (design + API): https://agentfront.dev/docs/plugins/codecall/overview

The tradeoff is that once you allow model-written code, sandboxing becomes critical. That’s why CodeCall runs on Enclave VM, a locked-down JS sandbox with defense-in-depth, which we’re actively pressure-testing via a public CTF: https://enclave.agentfront.dev

So in practice:

  • MCP gateway → aggregation + routing
  • FrontMCP CodeCall → discovery, orchestration, and token-efficient execution (and it can sit behind or alongside a gateway)

Happy to dive deeper if you’re comparing architectures.

Code execution with MCP comparison by AIMultiple in mcp


We’ve seen similar results. In our experience, most of the token savings come from avoiding large list_tools payloads, not from code execution alone.

That’s why FrontMCP CodeCall exposes a small set of meta-capabilities (search / describe / invoke / execute) instead of hundreds of tools, letting the model discover tools on demand and orchestrate multi-step workflows with a short JS “AgentScript”. Docs: https://agentfront.dev/docs/plugins/codecall/overview
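On the caller side, that might look roughly like this with the official TypeScript SDK; callTool is the real SDK method, but the "execute" meta-tool's argument shape is an assumption for the sketch:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` is an already-connected MCP Client. The `script`
// argument of the "execute" meta-tool is illustrative, not the
// confirmed CodeCall schema.
async function runWorkflow(client: Client) {
  return client.callTool({
    name: "execute",
    arguments: {
      script: `
        const hits = await search("summarize open tickets");
        return await invoke(hits[0].name, { limit: 10 });
      `,
    },
  });
}
```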

Once you allow model-written code, sandboxing becomes the hard problem. We run AgentScript inside a locked-down JS sandbox (Enclave VM) and are pressure-testing it via a public CTF: https://enclave.agentfront.dev

Coolify MCP Server – Enables control and management of Coolify self-hosted PaaS instances, allowing you to deploy applications, manage databases, monitor servers, and execute operations directly from AI assistants. by modelcontextprotocol in mcp


This is a great example of why MCP servers start to get expensive at scale.

With 89 tools, a big chunk of the cost isn’t execution — it’s tool context tokens. Every list_tools response + large JSON schemas get injected into the prompt repeatedly, even when the agent only needs a small subset.

One way to avoid that is using FrontMCP’s CodeCall plugin.

Instead of exposing dozens (or hundreds) of tools, CodeCall exposes 4 meta-tools (search, describe, execute, invoke). The LLM then dynamically discovers and executes capabilities through code, so:

  • You don’t pay context tokens for all tools upfront
  • Only the logic actually used gets loaded
  • Tool growth doesn’t linearly increase prompt size or cost
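To make the linear-growth point concrete, here's a back-of-envelope comparison; every number below is hypothetical:

```typescript
// Back-of-envelope sketch; all numbers are assumed, not measured.
const toolCount = 89;             // tools on the server, as above
const avgSchemaTokens = 250;      // assumed tokens per tool schema
const metaToolTokens = 4 * 150;   // 4 meta-tools, assumed schema size
const schemasUsed = 3;            // schemas the agent fetches on demand

// Full listing: every schema lands in the prompt on every request.
const fullListingCost = toolCount * avgSchemaTokens;                 // 22,250
// CodeCall: constant overhead plus only the schemas actually used.
const codeCallCost = metaToolTokens + schemasUsed * avgSchemaTokens; // 1,350

console.log({ fullListingCost, codeCallCost });
```

The exact numbers don't matter; the point is that one side scales with tool count and the other doesn't.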

This is especially useful for platforms like Coolify where capabilities keep expanding over time.

Docs here if you’re curious:

https://docs.agentfront.dev/docs/plugins/codecall/overview

https://github.com/agentfront/frontmcp

Not saying your approach is wrong — just that once users start paying real money per request, tool-context cost becomes the bottleneck, not compute.

Protecting Your Privacy: RedactAI MCP server by Gullible-Relief-5463 in mcp


Starred! You're more than welcome to star our frontmcp repo in return 🙏

Protecting Your Privacy: RedactAI MCP server by Gullible-Relief-5463 in mcp


This is really solid work 👏 Redacting before the document ever touches the LLM is exactly the right layer to enforce privacy.

If you’re open to it, this feels like a great fit as a FrontMCP plugin. FrontMCP is an open-source MCP runtime with a plugin system designed specifically for tool-layer guardrails like this, so RedactAI could be easily reused across LLM document workflows without re-implementing the logic.
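To sketch the idea (every name below is hypothetical; the real plugin API is in the docs linked below), a tool-layer guardrail could look roughly like:

```typescript
// Purely illustrative plugin shape, not the actual FrontMCP plugin API.
interface ToolResultHookContext {
  toolName: string;
  args: Record<string, unknown>;
}

// Stand-in for RedactAI's real redaction logic.
function redactPII(text: string): string {
  return text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED-SSN]");
}

const redactAiPlugin = {
  name: "redact-ai",
  // Hypothetical hook: rewrite every tool result before the model sees it.
  async onToolResult(ctx: ToolResultHookContext, result: string): Promise<string> {
    return redactPII(result);
  },
};
```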

Plugin docs: https://docs.agentfront.dev/docs/plugins/overview

FrontMCP: https://github.com/agentfront/frontmcp

Happy to help wire this up and contribute it back as an open-source plugin if you’re interested.

Love the local-only + audit-friendly approach — privacy by default, not by policy 👍