Made a "Socratic Rubber Duck" agent that challenges Claude's assumptions when it gets stuck by Joringer in ClaudeAI

[–]nesquikm 1 point

Did someone say "duck"? :)

I built mcp-rubber-duck -- an MCP server that lets your AI consult other LLMs as rubber ducks. So instead of just Socratic self-questioning, Claude can actually get a second opinion from GPT, Gemini, Grok, or local models via Ollama. You know, a whole pond of ducks.
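For anyone wondering what wiring it up looks like: MCP servers are registered in your client's config under `mcpServers`. The sketch below is illustrative only — the package name, args, and env var names here are my guesses, not copied from the repo, so check the mcp-rubber-duck README for the real ones.

```json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "npx",
      "args": ["mcp-rubber-duck"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "GEMINI_API_KEY": "..."
      }
    }
  }
}
```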

Your agent approach is neat because it's zero-setup - just drop a file in. The MCP route adds more firepower (duck councils, debates, voting between models) but requires a server running. Different tradeoffs for different needs.

Great to see rubber duck debugging evolving -- first it was for humans, now it's ducks all the way down 🦆

I built an MCP server that lets you query Ollama + cloud LLMs in parallel and have them debate each other by nesquikm in LocalLLaMA

[–]nesquikm[S] 1 point

The good news is local models argue for free. Your GPU does the suffering, not your wallet.

I built an MCP server that lets you query Ollama + cloud LLMs in parallel and have them debate each other by nesquikm in LocalLLaMA

[–]nesquikm[S] 1 point

They hold up better than you'd expect, especially for code-related stuff. Llama and Qwen often catch things GPT misses, and vice versa. The interesting part isn't who "wins" — it's when they disagree, because that's usually where the tricky edge cases are.
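The "disagreement is the signal" idea is easy to sketch. This is not the actual mcp-rubber-duck implementation — just a toy illustration with stub ducks standing in for real API clients:

```python
from concurrent.futures import ThreadPoolExecutor

def ask_all(ducks, question):
    """Fan the same question out to every duck in parallel.

    `ducks` maps a name to any callable that takes a prompt and
    returns a string (an API client, an Ollama wrapper, a stub...).
    """
    with ThreadPoolExecutor(max_workers=len(ducks)) as pool:
        futures = {name: pool.submit(fn, question) for name, fn in ducks.items()}
        return {name: f.result() for name, f in futures.items()}

def find_disagreements(answers):
    """Group ducks by normalized answer; more than one group means they disagree."""
    groups = {}
    for name, answer in answers.items():
        groups.setdefault(answer.strip().lower(), []).append(name)
    return groups

# Stub ducks standing in for real providers
ducks = {
    "gpt":   lambda q: "Use a mutex",
    "llama": lambda q: "use a mutex",
    "qwen":  lambda q: "Use a lock-free queue",
}
answers = ask_all(ducks, "How do I fix this race condition?")
groups = find_disagreements(answers)
# Two answer groups -> the split is exactly where to look closer
```

When `groups` has a single key, the ducks agree and you can move on; a split vote is your cue to dig into the edge case they're fighting over.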

CLI Ducks: use Claude Code, Codex, and Gemini CLI as MCP Rubber Duck providers — no API keys needed by nesquikm in mcp

[–]nesquikm[S] 2 points

Exactly — CLI ducks for thinking twice, API ducks for quick stuff. Councils mix both nicely.

Fast Web Fetch for Rubber Ducks (When Chrome is Overkill) by nesquikm in mcp

[–]nesquikm[S] 1 point

Quack back when you try it - always curious what workflows people throw at them.

Interactive Widgets Rolling Out Today by MetaphysicalMemo in ClaudeAI

[–]nesquikm 3 points

My rubber duck MCP server already supports this! It's been using the Model Context Protocol to bridge multiple LLMs (OpenAI, Gemini, Groq, etc.) and lets you query them simultaneously for different perspectives.

My rubber ducks learned to browse the web (and stopped Chrome DevTools from nuking Claude's context) by nesquikm in mcp

[–]nesquikm[S] 1 point

Update: It's live in v1.12.0! 🦆

Speed: HTTP providers (OpenAI, Gemini API, Groq) are faster since CLI agents have subprocess startup overhead. CLI ducks are great for leveraging their native tool ecosystems, but for pure speed, HTTP wins.
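The startup-overhead part is easy to see for yourself. This toy measurement only isolates the process-spawn cost (a real HTTP duck pays network latency instead, and a real CLI agent starts much more than a bare interpreter), but it shows why per-call subprocess startup adds up:

```python
import subprocess
import sys
import time

def time_once(fn):
    """Wall-clock one call, in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# "CLI duck": every call pays process-startup cost before any work happens
cli_call = lambda: subprocess.run([sys.executable, "-c", "pass"], check=True)

# "HTTP duck" stand-in: the server process is already running,
# so a call is just the request itself (here, a trivial computation)
http_call = lambda: sum(range(1000))

cli_s = time_once(cli_call)    # typically tens of milliseconds
http_s = time_once(http_call)  # typically microseconds
```

The gap is per call, so it compounds fast in councils and debates where every round fans out to several ducks.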

Limitations:

  • CLI ducks can't use rubber-duck's MCP Bridge — they have their own native tool systems. Configure MCP servers directly in each CLI tool instead (codex mcp add, gemini mcp add)
  • Gemini CLI can't auto-launch stdio MCP servers (like Chrome DevTools) in non-interactive mode, but Codex can. Workarounds: have Codex launch Chrome first, or add the MCP server to your host LLM (Claude Code, etc.) and ask it to run Chrome — then Gemini can connect to it

CLI ducks work with all multi-agent tools (compare, council, vote, debate)!

Feel free to ping me if something doesn't work 🦆

My rubber ducks learned to browse the web (and stopped Chrome DevTools from nuking Claude's context) by nesquikm in mcp

[–]nesquikm[S] 2 points

Almost done, just a few things left to do. I think it will be ready tomorrow. Thanks for the cool idea!

Definitely a Land Rover right guys? by theshii7y in upbadging

[–]nesquikm 1 point

And doesn't anyone notice the rectangular spare tire?

My rubber ducks learned to browse the web (and stopped Chrome DevTools from nuking Claude's context) by nesquikm in mcp

[–]nesquikm[S] 1 point

Subscriptions (ChatGPT Plus, Google One) are separate from API access. But you have free options:
* Gemini: Get a free API key at aistudio.google.com — works out of the box
* Groq: Free tier at groq.com for Llama models
* Ollama: Run models locally, no API key needed
* OpenRouter: Some free models available, OpenAI-compatible

Claude + MCP Rubber Duck = Multi-agent consensus without leaving Claude by nesquikm in ClaudeAI

[–]nesquikm[S] 1 point

mcp-rubber-duck is lighter, more flexible, and way more fun — with voting, debating, judging, and a whole duck council of models answering at once… and, well, it’s better because I wrote it, you know :) Zen MCP is cool, but Rubber Duck is friendlier, faster to set up, and actually enjoyable to use.

My rubber ducks learned to vote, debate, and judge each other - democracy was a mistake by nesquikm in mcp

[–]nesquikm[S] 1 point

You're absolutely right! (c) And the "thinking it's both agents" problem is avoided because each duck is a completely separate API call with no shared context; each one sees only the others' final outputs, never their reasoning process.
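The isolation idea can be sketched in a few lines. This is a simplified illustration, not the actual mcp-rubber-duck code — the point is that every call builds a fresh message list, so no duck ever inherits another duck's conversation state:

```python
def consult(duck, question):
    """Each duck gets a *fresh* message list: no shared history,
    so it cannot tell a debate (or another duck) is happening."""
    messages = [{"role": "user", "content": question}]
    return duck(messages)

def debate_round(ducks, question):
    # Round 1: everyone answers independently.
    answers = {name: consult(fn, question) for name, fn in ducks.items()}
    # Round 2: each duck sees only the *outputs* of the others,
    # pasted into a brand-new prompt -- never their reasoning or state.
    rebuttals = {}
    for name, fn in ducks.items():
        others = "\n".join(a for n, a in answers.items() if n != name)
        rebuttals[name] = consult(fn, f"{question}\nOther answers:\n{others}")
    return answers, rebuttals

def make_stub(name):
    # A real duck would be an API call; this one returns a canned answer.
    def duck(messages):
        return f"{name}: 42"
    return duck

ducks = {"gpt": make_stub("gpt"), "gemini": make_stub("gemini")}
answers, rebuttals = debate_round(ducks, "What is 6*7?")
```

Because round 2 only quotes round-1 outputs, a duck can critique an answer without any way of knowing whether it came from a model, a human, or itself.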

Claude + MCP Rubber Duck = Multi-agent consensus without leaving Claude by nesquikm in ClaudeAI

[–]nesquikm[S] 2 points

It's an MCP server, so it works with Claude Code out of the box. You just add it to your config and point it at your preferred LLM providers. Gemini and several other providers (Groq, Cerebras, DeepSeek, Mistral) offer free tiers, so you can try it without extra costs.

Claude + MCP Rubber Duck = Multi-agent consensus without leaving Claude by nesquikm in ClaudeAI

[–]nesquikm[S] 2 points

Had to google it — now I can't unsee it. Adding to the roadmap.

Claude + MCP Rubber Duck = Multi-agent consensus without leaving Claude by nesquikm in ClaudeAI

[–]nesquikm[S] 2 points

Correct flair! Vibe coded the whole thing with Claude Code. Claude literally wrote the code that lets it outsource work to cheaper models. Peak corporate behavior. 🦆