This one gives you systematic workflows for writing and optimizing LLM prompts instead of guessing what works. It covers few-shot example selection (3-5 examples, simple-to-complex ordering), chain-of-thought scaffolding with verification steps, and template composition with conditional sections. The optimization workflow is solid: establish baseline metrics first, run single-variable A/B tests, and revert any change that drops accuracy. It loads targeted reference files for specific patterns rather than dumping everything at once. Use it when you need to debug why a prompt isn't working, structure few-shot examples that actually improve results, or build reusable prompt templates with measurable performance tracking.
npx skills add https://github.com/giuseppe-trisciuoglio/developer-kit --skill prompt-engineering
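To make the patterns concrete, here is a minimal sketch of what the skill's core ideas look like in code: a reusable template with a conditional chain-of-thought section, few-shot examples ordered simple-to-complex, and a single-variable A/B step that reverts on an accuracy drop. All names (`PromptTemplate`, `ab_step`, etc.) are illustrative, not the skill's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FewShotExample:
    question: str
    answer: str
    complexity: int  # lower = simpler; used to order examples simple-to-complex

@dataclass
class PromptTemplate:
    instruction: str
    examples: list = field(default_factory=list)
    use_chain_of_thought: bool = False  # conditional section, included only when set

    def render(self, query: str) -> str:
        parts = [self.instruction]
        # Few-shot block: cap at 5 examples, simplest first
        for ex in sorted(self.examples, key=lambda e: e.complexity)[:5]:
            parts.append(f"Q: {ex.question}\nA: {ex.answer}")
        if self.use_chain_of_thought:
            parts.append("Think step by step, then verify your answer before responding.")
        parts.append(f"Q: {query}\nA:")
        return "\n\n".join(parts)

def accuracy(prompt_variant, eval_set, run_model):
    # Fraction of eval questions the model answers correctly with this prompt
    correct = sum(run_model(prompt_variant, q) == a for q, a in eval_set)
    return correct / len(eval_set)

def ab_step(current, candidate, eval_set, run_model):
    # Single-variable A/B test: keep the candidate only if it does not
    # drop accuracy relative to the baseline; otherwise revert.
    baseline = accuracy(current, eval_set, run_model)
    trial = accuracy(candidate, eval_set, run_model)
    return (candidate, trial) if trial >= baseline else (current, baseline)

template = PromptTemplate(
    instruction="Answer the arithmetic question.",
    examples=[
        FewShotExample("12 * 3?", "36", complexity=2),
        FewShotExample("2 + 2?", "4", complexity=1),
    ],
    use_chain_of_thought=True,
)
prompt = template.render("17 * 4?")
```

The rendered prompt places the `2 + 2?` example before `12 * 3?` despite their declaration order, because ordering is driven by the `complexity` field rather than list position.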