Wraps any OpenAI-compatible API endpoint plus CLI coding agents like Claude Code, Aider, and Gemini CLI into MCP tools for querying multiple LLMs simultaneously. Exposes operations for single queries, multi-model comparisons, structured debates between models, consensus voting, and iterative response refinement. Includes conversation management, automatic failover, usage tracking, and an MCP bridge that lets your "ducks" access other MCP servers. You'd reach for this when debugging complex problems that benefit from multiple AI perspectives, comparing model outputs side-by-side, or running structured evaluations where models judge each other's responses. Supports rich HTML interfaces in MCP Apps-compatible clients.
claude mcp add --transport stdio nesquikm-mcp-rubber-duck -- uvx mcp-rubber-duck
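Since the server talks to OpenAI-compatible endpoints, registration typically also needs provider credentials passed through to the server process. A sketch using `claude mcp add`'s `--env` flag; the environment variable names here are assumptions for illustration, not confirmed by the project's documentation:

```shell
# Hypothetical setup: OPENAI_API_KEY / OPENAI_BASE_URL are assumed
# variable names, not verified against mcp-rubber-duck's config.
export OPENAI_API_KEY="sk-..."

claude mcp add --transport stdio nesquikm-mcp-rubber-duck \
  --env OPENAI_API_KEY="${OPENAI_API_KEY}" \
  --env OPENAI_BASE_URL="https://api.openai.com/v1" \
  -- uvx mcp-rubber-duck
```

Pointing `OPENAI_BASE_URL` at a different OpenAI-compatible endpoint (a local server, a proxy, another vendor) is how a setup like this would swap providers without changing the server registration.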