If you're working with ByteDance's Jimeng Seedance 2.0 for multimodal video generation, this skill teaches you the @ reference syntax that actually matters. You'll learn how to assign roles to uploaded images, videos, and audio (up to 12 files combined), structure time-segmented prompts for longer videos, and use proper camera terminology like Hitchcock zooms and whip pans. The guide covers practical workflows for character consistency, camera movement replication, video extension, and effects cloning. It's essentially a translation layer between what you want and what the model needs to hear, complete with constraint tables and example prompts that break down 15-second sequences by the second. Worth reading if you're tired of guessing why your video generations miss the mark.
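To make the idea concrete, here is a rough sketch of what an @-referenced, time-segmented prompt can look like. The role labels and timestamp format below are illustrative placeholders based on the description above, not Seedance 2.0's confirmed syntax — the skill itself documents the exact form:

```text
@image1: lead character (keep face and outfit consistent)
@image2: background environment reference
@video1: camera movement to replicate

0-5s: slow dolly-in on @image1 standing in the @image2 setting
5-10s: whip pan matching the movement in @video1
10-15s: Hitchcock zoom (dolly zoom) on @image1, hold on the final frame
```

The point is the structure: each uploaded file gets an explicit role, and longer generations are broken into labeled time segments rather than described in one undifferentiated paragraph.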
npx skills add https://github.com/dexhunter/seedance2-skill --skill seedance-prompt-en