Google's Model Armor, wrapped as a Claude skill for content-safety filtering. You get sanitization commands for both user prompts and model responses, plus template creation for custom filtering rules. The main value is enterprise-grade content moderation built into your workflow without writing safety logic from scratch. It's overkill for most personal projects, but if you're building user-facing AI features or need to meet compliance requirements, it handles the heavy lifting. The schema-inspection commands are helpful, since Google's API surface can be dense to navigate.
npx skills add https://github.com/googleworkspace/cli --skill gws-modelarmor
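Under the hood, the prompt-sanitization commands map onto Model Armor's template-scoped sanitize endpoints. A minimal sketch of how such a request is shaped, assuming the publicly documented `sanitizeUserPrompt` REST method; the endpoint pattern, JSON field names, and the `my-project`/`my-template` identifiers here are assumptions for illustration, not output from this skill:

```python
import json

def build_sanitize_request(project: str, location: str, template: str, prompt: str):
    """Construct the (assumed) Model Armor sanitizeUserPrompt URL and JSON body.

    Verify the endpoint pattern and field names against current Google docs
    before relying on this; auth (a Bearer token) is omitted for brevity.
    """
    url = (
        f"https://modelarmor.{location}.rep.googleapis.com/v1/"
        f"projects/{project}/locations/{location}/"
        f"templates/{template}:sanitizeUserPrompt"
    )
    body = json.dumps({"userPromptData": {"text": prompt}})
    return url, body

# Hypothetical placeholders, not real resource names:
url, body = build_sanitize_request(
    "my-project", "us-central1", "my-template", "please review this text"
)
```

The response would carry a sanitization result indicating which configured filters (e.g. for prompt injection or sensitive data) matched, which is what the skill surfaces back to you.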