This skill captures real API responses from providers and saves them as test fixtures, so you can validate parsing logic without making live calls. You run example scripts with generateText or streamText, log the raw responses, and store them in a __fixtures__ folder, following the naming conventions from the OpenAI package. For streaming responses, a saveRawChunks helper dumps every chunk to the output folder automatically. This is particularly useful when building provider integrations that need reproducible test data. Fixtures should be actual provider responses; if a response is very large, trim it down without changing its structure.
npx skills add https://github.com/vercel/ai --skill capture-api-response-test-fixture