This skill spawns code reviewers on the opposite AI model to challenge your work from adversarial angles: if you're working in Claude, it fires up Codex reviewers via the CLI; if you're in Codex, it runs Claude. The reviewers attack from distinct lenses such as Skeptic, Architect, and Minimalist, grounded in principles from your brain directory. You get a synthesized verdict with findings, then apply your own judgment about what to accept or reject. It's designed for post-cook sessions with large diffs, or for right after a planning phase. The hard constraint is the cross-model requirement, which means actual CLI calls rather than subagents. Run it when you want genuine friction, not rubber-stamp validation.
npx skills add https://github.com/poteto/noodle --skill adversarial-review
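The "CLI calls, not subagents" point can be sketched roughly as follows. This is an illustration, not the skill's actual internals: the prompts are made up, and it assumes the Codex CLI's `exec` subcommand and the Claude Code CLI's `-p` (print) flag as the non-interactive entry points each model would use to invoke the other.

```shell
# Illustrative sketch only; reviewer prompts and lens wording are assumptions.
# From a Claude session, each reviewer lens could be a separate non-interactive Codex call:
codex exec "Run 'git diff main', then review the diff as a Skeptic: challenge every assumption and flag hidden risks."

# From a Codex session, the reverse direction could use Claude Code's print mode,
# piping the diff in as context:
git diff main | claude -p "Review this diff as a Minimalist: flag anything that could be deleted or simplified."
```

Each lens runs as its own process, which is what makes the reviews genuinely independent of the session doing the work.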