This is Claude sending your code to a different AI (Codex) for review, banking on the idea that cross-model validation catches different bug classes than self-review. The skill walks you through the `codex` CLI with sharp warnings about the stuff that actually breaks: a bare `codex review` hangs without a scope flag, piping output to `tail` loses the findings you care about, and you need explicit file redirection to see progress. It covers the standard patterns (pre-PR review, commit-level checks, WIP validation) and knows when to escalate from structured review to freeform security audits with `codex exec`. The core bet is architectural: different training distributions mean different blind spots, so the reviewer isn't just rechecking the author's work with the same biases baked in.
npx skills add https://github.com/hyperb1iss/hyperskills --skill codex-review
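The output-handling warning above can be sketched in shell. This is a minimal illustration using a stub function in place of the real `codex review` (its exact scope flags aren't reproduced here, and the stub's output lines are invented for the demo): redirect the full stream to a file so findings survive, rather than piping the live stream through `tail`.

```shell
# Stand-in for `codex review <scope>`; the real CLI's flags and output
# format are not reproduced here -- this only demonstrates the pattern.
codex_review_stub() {
  echo "progress: scanning 12 files"
  echo "FINDING: unchecked error return in parser.go"
}

# Capture everything (stdout and stderr) to a file; nothing is lost.
codex_review_stub > findings.txt 2>&1

# Watch progress from the file without truncating the findings themselves.
tail -n 1 findings.txt
```

The same pattern applies to the real command: `codex review ... > findings.txt 2>&1`, then inspect or `tail -f` the file, instead of piping the review itself into `tail` and dropping earlier output.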