This gives you a framework for code reviews that actually catch bugs instead of bikeshedding style preferences. It enforces prioritized feedback (blocker/major/minor/suggestion), includes templates for common issues like SQL injection and missing error handling, and coordinates four QE agents to scan for security, performance, and coverage issues in parallel. The standout feature is minimum-findings enforcement: if a review scores below 3.0 on weighted findings, it automatically runs a devil's advocate agent as a meta-reviewer to dig deeper. Honest take: the 400-line review limit and the "ask questions, not commands" guidance are the most immediately useful parts for human reviewers, while the agent orchestration handles the tedious security scanning you'd otherwise miss.
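To make the minimum-findings enforcement concrete, here's a minimal sketch of how a weighted-findings threshold like that could work. The severity weights, function names, and finding shape below are all assumptions for illustration; the skill's actual scoring internals aren't documented in this blurb.

```python
# Hypothetical sketch of the minimum-findings check described above.
# The severity weights below are illustrative, not the skill's real values.
WEIGHTS = {"blocker": 3.0, "major": 2.0, "minor": 1.0, "suggestion": 0.5}
THRESHOLD = 3.0  # below this, escalate to the devil's advocate meta-reviewer

def weighted_score(findings):
    """Sum severity weights over findings shaped like {"severity": "major"}."""
    return sum(WEIGHTS.get(f["severity"], 0.0) for f in findings)

def needs_meta_review(findings):
    """True when the review is too thin and a deeper second pass should run."""
    return weighted_score(findings) < THRESHOLD

# A review with only one minor nit and one suggestion scores 1.5 -> escalate
review = [{"severity": "minor"}, {"severity": "suggestion"}]
print(needs_meta_review(review))  # True
```

The point of a threshold like this is that a review with zero or trivial findings is itself suspicious, so the system treats "too clean" as a signal to look harder rather than a pass.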
npx skills add https://github.com/proffesor-for-testing/agentic-qe --skill code-review-quality