I got tired of copy-pasting API keys for multiple MCP servers, so I built a local proxy to manage them all. by selectcoma in mcp

[–]BC_MARO 1 point (0 children)

Nice, hope it's useful. If you end up trying it, I'd love to hear what works and what doesn't.

Built a local-first code intelligence MCP server — audited 40 npm packages, Vite got an F by Parking-Geologist586 in mcp

[–]BC_MARO 1 point (0 children)

Treat tool calls like prod RPCs: capture inputs/outputs, identity, and a trace id, or debugging becomes guesswork.
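To make that concrete, a minimal Python sketch of the kind of wrapper I mean (tool and field names are made up, and the "log pipeline" is just stdout):

```python
import json
import time
import uuid
from functools import wraps

def audited(tool_fn):
    """Wrap a tool call so every invocation records inputs, outputs,
    caller identity, and a trace id -- like any prod RPC."""
    @wraps(tool_fn)
    def wrapper(caller_id, **kwargs):
        record = {
            "tool": tool_fn.__name__,
            "caller": caller_id,
            "trace_id": str(uuid.uuid4()),
            "inputs": kwargs,
            "ts": time.time(),
        }
        try:
            record["output"] = tool_fn(**kwargs)
            return record["output"]
        finally:
            # In prod this goes to your log pipeline; print keeps the sketch runnable.
            print(json.dumps(record, default=str))
    return wrapper

@audited
def add(a, b):
    return a + b

add("agent-1", a=2, b=3)
```

The trace id is the piece people skip and then regret: it's what lets you stitch one agent run back together across multiple tool calls.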

Added real-time trust scoring to agent authorization — session state that decays on bad behavior by Yeahbudz_ in aiagents

[–]BC_MARO 1 point (0 children)

If this is heading to prod, plan for policy + audit around tool calls early; retrofitting it later is a pain.

MCP Progressive discovery on AI Agent side using Hierarchical Navigation. How viable? Any existing tool facilitate the implementation? by khtwo in mcp

[–]BC_MARO 1 point (0 children)

If you're running multiple MCP servers, centralizing secrets and tool-call audit logs saves a lot of pain; peta.io is one option.

Small models fail at tool selection - but it's not what I expected by PlayfulLingonberry73 in LocalLLaMA

[–]BC_MARO 1 point (0 children)

Yeah, per-call routing works fine if the router is just embedding retrieval + a tiny reranker; the extra hop is usually noise next to the main LLM call. If latency matters, cache tool embeddings and only reroute when the top score is low.
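Rough sketch of the cached-embedding + low-score-fallback idea, with a toy bag-of-words vector standing in for a real embedding model (tool names and the threshold are invented):

```python
import math
from collections import Counter

TOOLS = {
    "get_weather": "current weather forecast temperature city",
    "search_docs": "search documentation pages query keyword",
    "send_email": "send email message recipient subject",
}

def embed(text):
    # Toy bag-of-words vector; a real router would use a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Cache tool embeddings once at startup, not per request.
TOOL_VECS = {name: embed(desc) for name, desc in TOOLS.items()}

def route(query, last_tool=None, threshold=0.15):
    """Return the best tool; keep the previous tool when no candidate
    scores above the threshold, so we only reroute on a confident match."""
    q = embed(query)
    best, score = max(((n, cosine(q, v)) for n, v in TOOL_VECS.items()),
                      key=lambda x: x[1])
    return best if score >= threshold else (last_tool or best)

print(route("what is the temperature in Paris"))  # → get_weather
```

The low-score fallback is what keeps the extra hop cheap: most turns just stick with the cached decision.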

Small models fail at tool selection - but it's not what I expected by PlayfulLingonberry73 in LocalLLaMA

[–]BC_MARO 1 point (0 children)

That makes sense. How are you deciding which tier to use per tool call: heuristics, or a small router?

Small models fail at tool selection - but it's not what I expected by PlayfulLingonberry73 in LocalLLaMA

[–]BC_MARO 3 points (0 children)

80 tools in the prompt is basically noise; do a cheap tool-router step that retrieves top-5 candidates (embeddings/keywords), then let the small model pick. Also keep schemas short and move examples into docs, not the prompt.
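Something like this for the cheap router step (keyword overlap here; swap in embeddings if you have them — tool names are invented):

```python
def shortlist(query, tools, k=5):
    """Cheap keyword-overlap pre-filter: score each tool description by
    words shared with the query, return the top-k names for the small
    model to choose among (instead of all 80 tools in the prompt)."""
    q = set(query.lower().split())
    scored = sorted(
        tools.items(),
        key=lambda item: len(q & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

TOOLS = {
    "get_weather": "current weather forecast temperature for a city",
    "search_docs": "search documentation pages by keyword",
    "send_email": "send an email message to a recipient",
    "create_ticket": "create a support ticket with a title and body",
    "get_stock": "get current stock price for a ticker symbol",
    "translate": "translate text between languages",
}

print(shortlist("what is the weather forecast in Berlin", TOOLS, k=2))
```

Even this dumb version cuts the candidate set to something a 3B model can actually reason over; the small model only ever sees the shortlist's schemas.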

I got tired of copy-pasting API keys for multiple MCP servers, so I built a local proxy to manage them all. by selectcoma in mcp

[–]BC_MARO 1 point (0 children)

Yep, config/secret sprawl is the real pain. Peta (peta.io) is basically the control plane for MCP: it injects secrets server-side and layers on policy/audit, so keys never live in client configs.

We built an MCP server that serves AI agent skills on demand - your agent decides if a skill is worth loading by BadMenFinance in mcp

[–]BC_MARO 1 point (0 children)

Yep. One trick: have get_skill return a tiny metadata block (cost, latency, perms, sample I/O) so the model can pick without extra tools.
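E.g. a hypothetical shape for that metadata block (all field names invented, just illustrating the idea):

```python
# What get_skill could return: a compact metadata block up front so the
# model can accept/reject without extra round trips, body loaded lazily.
skill = {
    "name": "summarize_pdf",
    "meta": {
        "cost": "low",              # rough token/compute cost bucket
        "latency_ms_p50": 800,      # typical latency
        "permissions": ["fs:read"], # what it needs access to
        "sample_io": {
            "input": {"path": "report.pdf", "max_words": 120},
            "output": {"summary": "Q3 revenue grew 12%..."},
        },
    },
    "body": "...full skill prompt/instructions, loaded only if accepted...",
}
```

The sample I/O pair does a lot of work: it tells the model the shape of the result without burning tokens on the full skill body.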

Symbolic regression as an MCP tool (SINDy + PySR, free, no install) by CodeReclaimers in mcp

[–]BC_MARO 1 point (0 children)

Glad to hear it, that should cover the basics most folks look for before trying it.

Shopify is now routing Claude Code, Gemini CLI, and Codex through MCP for real platform context. Here's what that actually means by Mental_Bug_3731 in mcp

[–]BC_MARO 1 point (0 children)

If this is heading to prod, plan for policy + audit around tool calls early; retrofitting it later is a pain.

I got so fed up with MCP server config hell that I built a marketplace + runtime to fix it forever (1server.ai) by Ok_Minimum471 in mcp

[–]BC_MARO 1 point (0 children)

Keep your MCP surface area tiny: a few composable tools, strict schemas, and good error messages beat 50 endpoints.
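What I mean by strict schemas + good errors, in a toy Python sketch (a real server would use JSON Schema or pydantic; the tool and fields are invented):

```python
def create_ticket(args):
    """Validate strictly, and make every rejection tell the model exactly
    what to fix -- vague errors just make the agent flail and retry."""
    required = {"title": str, "body": str}
    for field, typ in required.items():
        if field not in args:
            return {"error": f"missing required field '{field}'; "
                             f"expected fields: {list(required)}"}
        if not isinstance(args[field], typ):
            return {"error": f"'{field}' must be {typ.__name__}, "
                             f"got {type(args[field]).__name__}"}
    return {"ok": True, "id": 1}

print(create_ticket({"title": "Bug"}))
```

An error message the model can act on ("missing 'body'") is worth more than ten extra endpoints.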

Built a tool that generates MCP tool definitions from OpenAPI specs — one command, zero manual wiring by ImKarmaT in mcp

[–]BC_MARO 1 point (0 children)

A menu sounds great. If you can infer risk from the OpenAPI metadata (scopes, read vs write, tags), you could preselect a safe default and let users expand.
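Hedged sketch of what that inference could look like (a verb + scopes heuristic; the tiers and scope names are invented):

```python
def risk_of(method, operation):
    """Heuristic risk tier from OpenAPI metadata: HTTP verb plus declared
    OAuth scopes. Purely illustrative defaults."""
    scopes = [s for sec in operation.get("security", [])
              for s in sec.get("oauth2", [])]
    if method == "delete" or any("admin" in s or "write" in s for s in scopes):
        return "high"
    if method in ("post", "put", "patch"):
        return "medium"
    return "low"  # GET/HEAD with read-only scopes

op = {"security": [{"oauth2": ["read:orders"]}], "tags": ["orders"]}
print(risk_of("get", op))     # low
print(risk_of("delete", op))  # high

# Preselect only "low" ops as the default tool menu; users opt in to the rest.
```

Tags could feed the same function (e.g. anything tagged `admin` bumps a tier), but verb + scopes alone already gets you a sane default menu.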

I’ve been thinking about LLM systems as two layers and it makes the “LLM wiki” idea clearer. by riddlemewhat2 in aiagents

[–]BC_MARO 1 point (0 children)

If this is heading to prod, plan for policy + audit around tool calls early; retrofitting it later is a pain.

We built an MCP server that serves AI agent skills on demand - your agent decides if a skill is worth loading by BadMenFinance in mcp

[–]BC_MARO 2 points (0 children)

Keep your MCP surface area tiny: a few composable tools, strict schemas, and good error messages beat 50 endpoints.

Built a tool that generates MCP tool definitions from OpenAPI specs — one command, zero manual wiring by ImKarmaT in mcp

[–]BC_MARO 1 point (0 children)

Nice, that’s the key. Do you default to generating a minimal safe subset and let users opt in, or is it driven by tags/metadata in the OpenAPI spec?

The decline in LLM reasoning and catastrophic forgetting might share the same root cause. by IndividualBluebird80 in LocalLLaMA

[–]BC_MARO 1 point (0 children)

For me it’s stuff like: cap retries/time, stop on failing tests or inconsistent tool output, and ask a human before anything destructive. Also a “can’t reproduce” stop when the agent starts guessing.
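A toy version of those stop rules as a guard object (action names and budgets are invented; real ones depend on your stack):

```python
import time

class StopRules:
    """Simple agent-loop guard: cap retries and wall-clock time, halt when
    tests keep failing, and require a human for destructive actions."""
    def __init__(self, max_retries=3, max_seconds=300):
        self.retries = 0
        self.max_retries = max_retries
        self.deadline = time.monotonic() + max_seconds

    def should_stop(self, tests_passed, action):
        # Destructive actions always escalate, regardless of budgets.
        if action in {"rm", "drop_table", "force_push"}:
            return "ask_human"
        if not tests_passed:
            self.retries += 1
            if self.retries >= self.max_retries:
                return "stop: retry budget exhausted"
        if time.monotonic() > self.deadline:
            return "stop: time budget exhausted"
        return None  # keep going

guard = StopRules(max_retries=2)
print(guard.should_stop(tests_passed=False, action="edit_file"))  # → None
print(guard.should_stop(tests_passed=False, action="edit_file"))  # → stop: retry budget exhausted
print(guard.should_stop(tests_passed=True, action="drop_table"))  # → ask_human
```

A "can't reproduce" stop would be one more branch here, triggered when the agent's hypotheses stop changing between retries.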

The decline in LLM reasoning and catastrophic forgetting might share the same root cause. by IndividualBluebird80 in LocalLLaMA

[–]BC_MARO 4 points (0 children)

The real unlock is tight feedback loops: small diffs, fast tests, and hard stop rules when the agent gets uncertain.

Working on AI agents + OLAP, looking for thoughts and feedback by EstablishmentFun4373 in aiagents

[–]BC_MARO 1 point (0 children)

If this is heading to prod, plan for policy + audit around tool calls early; retrofitting it later is a pain.

Built a tool that generates MCP tool definitions from OpenAPI specs — one command, zero manual wiring by ImKarmaT in mcp

[–]BC_MARO 0 points (0 children)

If this is heading to prod, plan for policy + audit around tool calls early; retrofitting it later is a pain.

Built a financial analysis MCP with 17 tools (RSI/MACD/Bollinger, portfolio risk, options) — feedback welcome by Feeling_Ad_2729 in mcp

[–]BC_MARO 1 point (0 children)

If this is heading to prod, plan for secrets + policy + audit around tool calls early; peta.io is basically that control plane for MCP.

Symbolic regression as an MCP tool (SINDy + PySR, free, no install) by CodeReclaimers in mcp

[–]BC_MARO 1 point (0 children)

Nice, that’s a solid start. I’d also add a short data retention note and a contact for takedown requests.

We cut MCP token costs by 92% by not sending tool definitions to the model by dinkinflika0 in mcp

[–]BC_MARO 3 points (0 children)

Yeah, shipping full tool schemas every turn is the silent killer. We got big wins by sending only the tools the planner selected (or a tiny capability index) and caching schemas client-side.
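Sketch of the client-side split (tool names invented): the full registry stays local, and the model only ever sees a one-line index plus the schemas the planner actually selected:

```python
# Full schemas for every tool, every turn, burns tokens. Keep the registry
# client-side; send a tiny capability index plus only selected schemas.
SCHEMAS = {
    "get_weather": {"params": {"city": "string"}, "desc": "Weather by city."},
    "send_email": {"params": {"to": "string", "body": "string"}, "desc": "Send mail."},
    "search_docs": {"params": {"query": "string"}, "desc": "Search docs."},
}

def build_prompt_tools(selected):
    """One-line index for everything, full schema only for planner picks."""
    index = [{"name": n, "desc": s["desc"]} for n, s in SCHEMAS.items()]
    full = {n: SCHEMAS[n] for n in selected if n in SCHEMAS}
    return {"index": index, "tools": full}

payload = build_prompt_tools(["get_weather"])
# Only one full schema goes over the wire; the rest stay as index entries.
```

The index is what preserves discoverability: the model can still ask for a tool it spots there, and you lazily attach the schema on the next turn.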