This is an unattended optimization loop that commits every improvement and reverts every failure until you interrupt it or it hits the goal. You give it one measurable target like "eliminate all any types" or "get coverage above 80%," it infers the verify command and guardrails, then runs modify-test-keep/revert cycles in git without asking permission. Inspired by Karpathy's autoresearch concept but generalized to any codebase metric, not just ML training. The interesting bit is the escalation protocol: after three failed attempts it refines the approach, after five it pivots strategy, and after two pivots it searches the web to unstick itself. Useful when you have a clear metric and want to let the agent grind on it overnight.
npx skills add https://github.com/aradotso/trending-skills --skill codex-autoresearch-loop
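The keep/revert cycle and escalation protocol described above can be sketched as a small control loop. This is a hypothetical illustration, not the skill's actual implementation: `verify`, `modify`, `commit`, and `revert` are stand-ins for the real verify command and git operations, and the failure thresholds (3 → refine, 5 → pivot, 2 pivots → web search) are taken from the description.

```python
def autoresearch_loop(verify, modify, commit, revert, max_iters=50):
    """Modify-test-keep/revert cycles with the escalation protocol:
    3 consecutive failures -> refine, 5 -> pivot strategy,
    2 pivots -> search the web. Lower verify() score is better."""
    best = verify()              # baseline metric before any changes
    fails, pivots = 0, 0
    mode = "baseline"            # hypothetical strategy label
    for _ in range(max_iters):
        modify(mode)             # attempt one change under current strategy
        score = verify()
        if score < best:         # improvement: commit and keep it
            commit()
            best, fails = score, 0
        else:                    # failure: revert, count, maybe escalate
            revert()
            fails += 1
            if fails == 3:
                mode = "refined"         # refine the current approach
            elif fails == 5:
                fails, pivots = 0, pivots + 1
                mode = "pivoted"         # switch strategy entirely
                if pivots == 2:
                    mode = "web-search"  # unstick via external research

    return best

# Deterministic simulation: scores the verify command would report.
scores = iter([10, 9, 11, 11, 11, 8])
result = autoresearch_loop(
    verify=lambda: next(scores),
    modify=lambda mode: None,
    commit=lambda: None,
    revert=lambda: None,
    max_iters=5,
)
print(result)  # best metric reached after the simulated run
```

In the simulation, the second change regresses three times in a row (triggering a refine) before the refined approach lands an improvement; the real skill would be driving an agent and a test suite instead of canned scores.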