Creates Google Cloud Model Armor templates for filtering AI model inputs and outputs against security threats such as jailbreak attempts. You'll use this when setting up content-safety guardrails for production AI applications that need to block prompt injection, harmful content, or other model-abuse patterns. It ships with a jailbreak-detection preset, but you can pass custom JSON configurations for specific filtering rules. The resulting templates integrate with Google's AI services to sanitize prompts before they reach your models and responses before they reach users. Essential for any customer-facing AI feature where you can't manually review every interaction.
npx skills add https://github.com/googleworkspace/cli --skill gws-modelarmor-create-template
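As a sketch of what a custom JSON configuration might contain, the fragment below enables prompt-injection/jailbreak detection alongside a couple of responsible-AI filters. The field names follow the Model Armor template schema as documented by Google Cloud, but verify them (and the available `filterType` and `confidenceLevel` values) against the current API reference before use; the thresholds shown are illustrative, not recommendations:

```json
{
  "filterConfig": {
    "piAndJailbreakFilterSettings": {
      "filterEnforcement": "ENABLED",
      "confidenceLevel": "MEDIUM_AND_ABOVE"
    },
    "maliciousUriFilterSettings": {
      "filterEnforcement": "ENABLED"
    },
    "raiSettings": {
      "raiFilters": [
        { "filterType": "DANGEROUS", "confidenceLevel": "HIGH" },
        { "filterType": "HATE_SPEECH", "confidenceLevel": "MEDIUM_AND_ABOVE" }
      ]
    }
  }
}
```

Lower confidence thresholds (e.g. `LOW_AND_ABOVE`) catch more abuse at the cost of more false positives, so most teams start stricter on high-risk categories and relax per category after reviewing flagged traffic.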