The gws-modelarmor-sanitize-prompt skill sanitizes user-generated prompts through Google Model Armor templates to detect and filter potentially unsafe content before it reaches AI models. It's designed for Google Workspace CLI users who need to implement inbound safety measures for user inputs, supporting both direct text input and stdin piping. The skill solves the problem of identifying and mitigating prompt injection attacks and harmful content in user submissions by leveraging pre-configured Model Armor templates.
npx skills add https://github.com/googleworkspace/cli --skill gws-modelarmor-sanitize-prompt