The ai-prompt-engineering-safety-review skill analyzes AI prompts for safety risks, bias, security vulnerabilities, and effectiveness using a structured evaluation framework covering harmful content, discrimination, misinformation, data exposure, and prompt injection attacks. It's designed for AI developers, prompt engineers, and organizations building responsible AI systems who need to identify and mitigate risks before deploying prompts in production. The skill solves the problem of ensuring AI prompts don't generate harmful outputs, expose sensitive data, or perpetuate biases by providing systematic assessment and detailed improvement recommendations aligned with responsible AI development practices.
npx skills add https://github.com/github/awesome-copilot --skill ai-prompt-engineering-safety-review