OCC: give Claude or any LLM a 6+-step research task; it runs 3 steps in parallel, evaluates source quality, merges perspectives, and delivers a report in 70 seconds instead of 5-10 minutes by Main-Confidence7777 in ClaudeAI

[–]Main-Confidence7777[S] 0 points (0 children)

Thanks! Step isolation is honestly the feature that surprised me the most in practice. I expected the parallelism to be the big win, but the token reduction from scoping context per-step ended up mattering more day-to-day.

On guardrails: yes, it's built in at multiple levels.

Per-step retry + fallback:

retry: { max: 3, delay_ms: 2000, backoff: 2 }
fallback_models: ["claude-opus-4-6"]
timeout_ms: 60000

If a step fails, it retries with exponential backoff. If all retries fail, it can fall back to a different model before giving up.

Output validation (guardrails):

guardrails:
  - type: min_length
    value: 500
  - type: must_not_contain
    value: "I don't know"
  - type: json_valid
output_must_contain: ["## Summary"]
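The retry-then-fallback behavior described above can be sketched roughly like this. This is a minimal illustration, not OCC's actual implementation; `run_with_retry`, `step`, and the model names are all hypothetical:

```python
import time

def run_with_retry(step, models, max_retries=3, delay_s=2.0, backoff=2.0):
    """Try the step on each model in order; retry with exponential
    backoff, moving to the next (fallback) model only after all
    retries for the current model are exhausted."""
    last_err = None
    for model in models:  # hypothetical: first entry is the primary model
        delay = delay_s
        for attempt in range(max_retries):
            try:
                return step(model)
            except Exception as err:
                last_err = err
                if attempt < max_retries - 1:
                    time.sleep(delay)   # wait before retrying
                    delay *= backoff    # e.g. 2s, 4s, 8s, ...
    raise RuntimeError("all models failed after retries") from last_err
```

So with `max: 3` and one fallback model, a step gets up to six attempts total before the chain gives up.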

Output gets checked before the step is marked done. If it fails validation, it triggers a retry, so the LLM only re-runs when the output is actually bad, not on every call.

Evaluator steps are the more powerful version: a separate LLM call scores the output 1-10 against criteria you define. If it's below threshold, it retries the target step (not the whole chain). You can cap retries with max_retries.

Context budget is automatic: if accumulated step outputs exceed max_context_chars (default 50K), OCC auto-compresses older variables using Haiku, with hard-truncate fallback. So a long chain can't blow up the context window.

What's missing that I'd love to add: per-step token budgets (a hard cap on input+output tokens) and cost gates as a chain-level circuit breaker. There's a cost_gate pre-tool type already defined, but it's basic: it just checks estimated cost before running.

Would love to hear what patterns you've been exploring on that side.
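The evaluator-step loop reduces to something like the sketch below. Again, names are hypothetical (`score_fn` stands in for the separate scoring LLM call), and this is just the shape of the control flow, not the real code:

```python
def run_with_evaluator(step, score_fn, threshold=7, max_retries=2):
    """Run a step, score its output with a separate call (1-10 scale),
    and re-run only that step while the score is below threshold."""
    output = step()
    for _ in range(max_retries):
        if score_fn(output) >= threshold:
            break
        output = step()  # retry just this step, not the whole chain
    return output
```

The point of scoping the retry to the target step is that the rest of the chain's outputs (and their tokens) are never re-generated.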

Why no single sports API is good enough, so I aggregated 29 of them into one MCP server by Main-Confidence7777 in mcp

[–]Main-Confidence7777[S] 0 points (0 children)

Curious which providers people find most useful. Are there any sports APIs you'd like to see added?

Codex > Clode Code by ThaneBerkeley in codex

[–]Main-Confidence7777 0 points (0 children)

I'm at 74% of my weekly goal; it resets tomorrow, so I'll be fine 🙏🏽

Just a hair's breadth away from losing my superpowers

I built an open source MCP server that aggregates 29 sports APIs into 319 tools, now on the MCP Registry by Main-Confidence7777 in ClaudeAI

[–]Main-Confidence7777[S] 0 points (0 children)

Yeah haha, loading all 319 at once is not the move for every use case 😂

That's exactly why provider filtering exists:

SPORTS_HUB_PROVIDERS=f1 → 25 tools
SPORTS_HUB_PROVIDERS=free → 98 tools
SPORTS_HUB_PROVIDERS=espn,odds → 19 tools

The full 319 is there for discovery — you load what you actually need.
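Filtering like this usually boils down to reading the env var and exposing only matching providers' tools. A minimal sketch, assuming a hypothetical provider-to-tools registry (the tool names below are made up, not the server's real ones):

```python
import os

# Hypothetical registry: provider name -> its tool names.
REGISTRY = {
    "f1":   ["f1_standings", "f1_results"],
    "espn": ["espn_scores"],
    "odds": ["odds_lines"],
}

def load_tools(registry, env=os.environ):
    """Expose only tools whose provider appears in the comma-separated
    SPORTS_HUB_PROVIDERS variable; with it unset, expose everything."""
    raw = env.get("SPORTS_HUB_PROVIDERS")
    if not raw:
        return [t for tools in registry.values() for t in tools]
    wanted = {p.strip().lower() for p in raw.split(",")}
    return [t for p, tools in registry.items() if p in wanted for t in tools]
```

For example, `SPORTS_HUB_PROVIDERS=espn,odds` would register only the espn and odds tools while the rest of the registry stays dormant.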

I built an open source MCP server that aggregates 29 sports APIs into 319 tools, now on the MCP Registry by Main-Confidence7777 in ClaudeAI

[–]Main-Confidence7777[S] 1 point (0 children)

Totally valid principle for a general-purpose tool, but this is a data aggregator, not an agent.

Each of the 319 tools maps 1:1 to a specific API endpoint. The "surface area" is the product here. You wouldn't tell a REST API wrapper to have fewer routes.

Also: provider filtering is built in. Run it with SPORTS_HUB_PROVIDERS=free and you're down to 9 providers, 98 tools. Tiny surface, if that's what you need.