This is an intent router for image-to-video generation on RunComfy that picks between three models based on what the user actually wants:

- General portrait or product animation with native audio: calls HappyHorse 1.0 (Arena #1, Elo 1392).
- A custom voiceover track that needs lip-sync: routes to Wan 2.7 with audio_url.
- A shot composed from multiple reference inputs (image + video + audio): uses Seedance 2.0 Pro.

The real value is that it bundles each model's documented prompting patterns, so you don't waste iterations picking the wrong endpoint or writing prompts that don't match the model's expectations.
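The routing decision above can be sketched as a small dispatch function. This is a minimal illustration, not the skill's actual implementation: the `Request` fields (`audio_url`, `reference_inputs`) and the returned model identifiers are hypothetical names chosen to mirror the description.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical request shape; field names are illustrative, not the skill's API.
@dataclass
class Request:
    image: str
    audio_url: str | None = None                    # custom voiceover for lip-sync
    reference_inputs: list[str] = field(default_factory=list)  # extra image/video/audio refs

def route(req: Request) -> str:
    # Composing from multiple reference inputs -> multi-reference model
    if len(req.reference_inputs) > 1:
        return "seedance-2.0-pro"
    # A supplied voiceover track -> lip-sync model driven by audio_url
    if req.audio_url:
        return "wan-2.7"
    # Default: general portrait/product animation with native audio
    return "happyhorse-1.0"

print(route(Request(image="portrait.png")))  # happyhorse-1.0
```

Checking the most specific condition (multiple references) before the voiceover case keeps a request that carries both from falling into the wrong branch.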
npx skills add https://github.com/agentspace-so/runcomfy-agent-skills --skill image-to-video