This one handles the full lifecycle of LLM prompt design, from writing your first zero-shot attempt to building evaluation frameworks that catch regressions. It walks you through the workflow (understand, design, test, iterate, deploy) and loads reference docs for specific patterns like chain-of-thought or structured outputs. The validation checkpoint at 80% accuracy is a nice forcing function: stop and diagnose failures before you keep tweaking at random. What I appreciate are the before/after examples and the concrete constraints, like "make one change at a time when debugging" and "don't deploy without systematic evaluation." It's opinionated enough to steer you away from common mistakes but flexible enough to work across model families. Worth using if you're moving beyond ad-hoc prompt tweaking.
npx skills add https://github.com/jeffallan/claude-skills --skill prompt-engineer
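To give a sense of what that 80% validation checkpoint looks like in practice, here is a minimal sketch of the idea: run a prompt over a small labeled eval set, compute accuracy, and collect the failing cases for diagnosis before iterating further. This is an illustration, not the skill's actual implementation; `run_prompt` is a hypothetical stand-in for whatever model call you use.

```python
from typing import Callable

def evaluate(run_prompt: Callable[[str], str],
             eval_set: list[tuple[str, str]],
             threshold: float = 0.80) -> tuple[float, list[tuple[str, str, str]]]:
    """Score a prompt against labeled cases; return accuracy and failures.

    Each eval_set entry is (input, expected_output). Failures are returned
    as (input, expected, actual) so you can diagnose them one at a time.
    """
    failures = []
    correct = 0
    for prompt_input, expected in eval_set:
        actual = run_prompt(prompt_input)
        if actual.strip() == expected.strip():
            correct += 1
        else:
            failures.append((prompt_input, expected, actual))
    accuracy = correct / len(eval_set)
    if accuracy < threshold:
        # The forcing function: stop tweaking and look at the failures.
        print(f"Checkpoint failed: {accuracy:.0%} < {threshold:.0%}; diagnose before iterating")
    return accuracy, failures

# Toy usage with a fake "model" that just uppercases its input
acc, fails = evaluate(lambda s: s.upper(),
                      [("ok", "OK"), ("no", "NO"), ("hi", "bye")])
```

The point of returning the failure list, rather than just a score, is that it supports the "make one change at a time" constraint: you fix the prompt against a specific failing case, re-run, and check that nothing else regressed.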