The gws-modelarmor-sanitize-response skill lets Google Workspace CLI users filter model-generated responses through a Model Armor template, screening outbound content for safety before it reaches end users. It accepts model output via command-line flags or standard input and applies the template's configured safety policies to detect and sanitize potentially harmful content. The skill is aimed at developers integrating AI models into Google Workspace applications who need outbound content filtering as part of their safety pipeline.
Install the skill with:

npx skills add https://github.com/googleworkspace/cli --skill gws-modelarmor-sanitize-response
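
Once installed, an invocation might look like the sketch below. This is an illustrative assumption, not taken from the skill's documentation: the flag names (`--template`, `--text`), the template resource path, and the exact command name are hypothetical and may differ from the skill's actual interface.

```sh
# Hypothetical usage sketch — flag names and the template path below are
# assumptions for illustration, not the skill's documented interface.

# Sanitize model output passed directly on the command line:
gws-modelarmor-sanitize-response \
  --template "projects/my-project/locations/us-central1/templates/my-template" \
  --text "Model-generated reply to screen before it reaches the end user"

# Or pipe model output in via standard input:
cat model_response.txt | gws-modelarmor-sanitize-response \
  --template "projects/my-project/locations/us-central1/templates/my-template"
```

Either way, the referenced Model Armor template supplies the safety policies, so the same command can sit at the end of a generation pipeline and gate whatever text the model produces before it is returned to the user.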