Before you spin up a LaunchDarkly experiment or guarded rollout, this skill helps you figure out which metrics to actually track. It checks what's already configured in your project's release policies so you don't duplicate guardrails, inventories which metrics have events flowing (avoiding the classic "why isn't this populating?" problem), and makes typed recommendations based on context. For experiments, it pushes you to articulate your hypothesis first, then suggests a primary metric plus guardrails and counter-metrics. For guarded rollouts, it leans conservative, since each attached metric is a potential rollback trigger. The workflow is advisory only: it won't create or attach anything; it just tells you what makes sense given what you're trying to measure.
npx skills add https://github.com/launchdarkly/agent-skills --skill launchdarkly-metric-choose