Instruments your existing provider calls with LaunchDarkly AI Config metrics by walking a four-tier ladder from managed runners down to manual tracking. The skill picks the lowest-ceremony option that still captures duration, tokens, and success/error for the Monitoring tab. You audit the call shape (chat loop vs. one-shot), check which provider package exists (OpenAI, LangChain, Vercel AI SDK), then follow the matching reference pattern. The approach is opinionated: default to the highest tier that fits, wrap the existing call instead of rewriting it, and always check `config.enabled` before hitting the provider. Covers Python and Node across a dozen frameworks and providers.
npx skills add https://github.com/launchdarkly/agent-skills --skill aiconfig-ai-metrics
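The manual-tracking tier described above can be sketched as a thin wrapper around the existing call: gate on `config.enabled`, time the call, record tokens, and report success or error. This is a minimal illustration, not the skill's actual output; `StubConfig`, `StubTracker`, and the `track_*` method names here are hypothetical stand-ins for the config and tracker objects LaunchDarkly's AI SDK provides, whose real API differs.

```python
import time
from dataclasses import dataclass, field

# Hypothetical stand-ins for the AI Config and its metrics tracker.
# The real objects come from LaunchDarkly's AI SDK and have a richer surface.
@dataclass
class StubTracker:
    events: list = field(default_factory=list)

    def track_duration(self, ms): self.events.append(("duration_ms", ms))
    def track_tokens(self, total): self.events.append(("tokens", total))
    def track_success(self): self.events.append(("success", True))
    def track_error(self): self.events.append(("error", True))

@dataclass
class StubConfig:
    enabled: bool
    tracker: StubTracker

def instrumented_call(config, provider_call):
    """Manual tier: wrap the existing provider call instead of rewriting it."""
    if not config.enabled:  # always check config.enabled before hitting the provider
        return None
    start = time.monotonic()
    try:
        response = provider_call()  # the untouched existing call
        config.tracker.track_success()
        config.tracker.track_tokens(response["usage"]["total_tokens"])
        return response
    except Exception:
        config.tracker.track_error()
        raise
    finally:
        # duration is recorded whether the call succeeded or failed
        config.tracker.track_duration(int((time.monotonic() - start) * 1000))

config = StubConfig(enabled=True, tracker=StubTracker())
out = instrumented_call(config, lambda: {"usage": {"total_tokens": 42}, "text": "ok"})
```

The `finally` block is the key design choice: duration lands in the Monitoring data even on the error path, so latency charts stay honest when the provider misbehaves.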