This is for when you need LLM outputs to be consistent and reliable, not just occasionally good. It covers the essentials: structured system prompts with clear sections, few-shot examples that actually help, chain-of-thought for complex reasoning, and output format specification. The sharp-edges table is honest about common mistakes such as vague instructions and skipped negative constraints. What I appreciate is the emphasis on systematic evaluation over gut feel, which is where most prompt work falls apart. It treats prompts like code that needs testing and iteration, not magic incantations you hope will work.
npx skills add https://github.com/davila7/claude-code-templates --skill prompt-engineer
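The techniques the skill covers can be sketched in a few lines. This is a minimal, illustrative example of assembling a structured system prompt with labeled sections, few-shot examples, a negative constraint, and an explicit output format; the section names, helper function, and example pairs are my own assumptions, not taken from the skill itself.

```python
# Illustrative sketch only: section layout and few-shot pairs are
# hypothetical, not copied from the prompt-engineer skill.

FEW_SHOT = [
    ("Summarize: The meeting moved to 3pm.",
     '{"summary": "Meeting rescheduled to 3pm."}'),
    ("Summarize: Launch delayed by a week.",
     '{"summary": "Launch pushed back one week."}'),
]

def build_prompt(task: str) -> str:
    """Assemble a system prompt with clear sections, few-shot
    examples, and an explicit output format."""
    sections = [
        "## Role\nYou are a precise summarizer.",
        # Negative constraint: say what the model must NOT do.
        "## Instructions\n"
        "- Think step by step before answering.\n"
        "- Do NOT add information that is not in the input.",
        '## Output format\nRespond with JSON: {"summary": "..."}',
        "## Examples\n" + "\n".join(
            f"Input: {q}\nOutput: {a}" for q, a in FEW_SHOT
        ),
        f"## Task\n{task}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt("Summarize: Budget approved yesterday.")
print(prompt)
```

The point of building the prompt programmatically is that each section becomes a testable unit: you can assert the output format is present, swap few-shot examples, and run the same evaluation suite after every change.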