Takes you from concept to finished AI video in one workflow. You brainstorm ideas and build a storyboard through a five-dimensional breakdown (content, visuals, camera, motion, audio); it then generates reference images with Seedream 4.5 and submits everything to Seedance 2.0 for video generation. Videos take about 10 minutes to render, and output is flexible: 16:9, 9:16, square, or ultrawide, with duration control from 4 to 15 seconds. Two generation modes are available: omni reference (throw in up to 9 images, 3 videos, and 3 audio files) or keyframe-based first/last-frame control. Gracefully falls back to standalone Python if the MCP service is down, which is smart given how flaky external APIs can be.
npx skills add https://github.com/hexiaochun/seedance2-api --skill seedance2-api
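The request limits and the MCP-with-fallback flow described above can be sketched in Python. Everything here is an assumption for illustration: the function names, payload field names, and the exact ultrawide ratio string are hypothetical, since the real Seedance 2.0 API surface isn't shown in this entry — only the documented limits (4–15 s duration, omni mode's 9 images / 3 videos / 3 audio files) come from the description.

```python
# Hypothetical sketch -- field names and ratio strings are assumptions,
# not the real Seedance 2.0 API. Limits come from the catalog entry above.

# "square" and "ultrawide" mapped to assumed ratio strings.
ASPECT_RATIOS = {"16:9", "9:16", "1:1", "21:9"}

def build_request(prompt, aspect="16:9", duration=8,
                  mode="omni", images=(), videos=(), audio=()):
    """Validate inputs against the documented limits and return a payload dict."""
    if aspect not in ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect}")
    if not 4 <= duration <= 15:
        raise ValueError("duration must be 4-15 seconds")
    if mode == "omni":
        # Omni reference mode: up to 9 images, 3 videos, 3 audio files.
        if len(images) > 9 or len(videos) > 3 or len(audio) > 3:
            raise ValueError("too many reference assets for omni mode")
    elif mode != "keyframe":
        raise ValueError("mode must be 'omni' or 'keyframe'")
    return {
        "prompt": prompt,
        "aspect_ratio": aspect,
        "duration_seconds": duration,
        "mode": mode,
        "references": {
            "images": list(images),
            "videos": list(videos),
            "audio": list(audio),
        },
    }

def generate(payload, mcp_call, local_call):
    """Try the MCP service first; fall back to the standalone path on failure."""
    try:
        return mcp_call(payload)
    except Exception:
        # MCP down or flaky -- degrade to the standalone Python path.
        return local_call(payload)
```

For example, `build_request("city at dusk", aspect="9:16", duration=12)` returns a payload dict, while a 20-second duration or ten reference images raises `ValueError` before anything is submitted. Validating locally like this keeps a ~10-minute render from being wasted on a request the service would reject.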