Spins up parallel AI judges with different perspectives to review your work, then consolidates their findings into a consensus verdict. Use it for validation when a single test can't capture taste, or when you need adversarial review of architecture decisions, security audits, or migration plans. Debate mode runs two rounds: judges see each other's findings and can revise their positions. The skill auto-detects your runtime's multi-agent capabilities and falls back gracefully when they're absent. Mixed mode runs the same perspectives across Claude and another vendor, so you can isolate whether a disagreement is about the work or the model. It handles brainstorming and research too, but the real value is catching things mechanical tests miss.
npx skills add https://github.com/boshu2/agentops --skill council
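The two-round debate pattern can be illustrated with a minimal sketch. Everything here is hypothetical: the stub judge functions stand in for LLM calls with distinct perspectives, and the majority-vote consolidation is one plausible consensus rule, not the skill's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# Hypothetical stand-in judges. A real council would prompt a model per
# perspective; each judge maps (work, peer_findings) -> (verdict, note).
def security_judge(work, peer_findings):
    if "eval(" in work:
        return "fail", "unsafe eval of untrusted input"
    return "pass", "no obvious security issues"

def style_judge(work, peer_findings):
    # In round 2 this judge sees round-1 findings and may revise its position.
    if any(v == "fail" for v, _ in peer_findings):
        return "fail", "deferring to a peer's failure finding"
    return "pass", "style acceptable"

def perf_judge(work, peer_findings):
    return "pass", "no hot loops detected"

JUDGES = [security_judge, style_judge, perf_judge]

def council(work, rounds=2):
    """Run judges in parallel for N rounds; each round sees the previous one."""
    findings = []
    for _ in range(rounds):
        prev = findings  # round 1 gets an empty list; round 2 gets round 1
        with ThreadPoolExecutor() as pool:
            findings = list(pool.map(lambda judge: judge(work, prev), JUDGES))
    # Consolidate: simple majority vote over final-round verdicts.
    verdicts = Counter(v for v, _ in findings)
    return verdicts.most_common(1)[0][0], findings

verdict, findings = council("result = eval(user_input)")
print(verdict)  # -> fail (style judge flips in round 2 after seeing security's finding)
```

In round 1 only the security judge fails the sample; in round 2 the style judge revises after seeing that finding, flipping the majority. That revision step is what distinguishes a debate from three independent reviews.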