This generates 3D motion for humans and humanoid robots from text prompts and kinematic constraints. Trained on roughly 700 hours of mocap data, it handles everything from simple walking cycles to complex choreographed sequences with keyframes and end-effector control. Pretrained models are provided for SOMA human skeletons, Unitree G1 robots, and SMPL-X for AMASS export. The interactive web demo is genuinely useful for sketching out constraint timelines before committing to a full generation run. It needs about 17 GB of VRAM, so plan on an RTX 3090 or better. Output quality is solid, and the post-processing handles foot skating reasonably well. A good fit if you're prototyping robot behaviors or need motion capture without actual capture hardware.
npx skills add https://github.com/aradotso/trending-skills --skill kimodo-motion-diffusion
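To make the "constraint timeline" idea concrete, here is a minimal sketch of how such a timeline might be represented before generation: a text prompt plus a set of joint constraints, each active over a time window. All names here (`Constraint`, `ConstraintTimeline`, the joint labels) are hypothetical illustrations, not the skill's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Constraint:
    # Hypothetical spec: pin a joint (e.g. an end effector) to a
    # target position over a time window, in seconds.
    joint: str
    target: tuple[float, float, float]
    start: float
    end: float

@dataclass
class ConstraintTimeline:
    prompt: str
    duration: float  # clip length in seconds
    constraints: list[Constraint] = field(default_factory=list)

    def add(self, joint, target, start, end):
        assert 0.0 <= start < end <= self.duration, "window must lie inside the clip"
        self.constraints.append(Constraint(joint, target, start, end))

    def active_at(self, t):
        # Which constraints bind at time t? A generator would consult
        # this per frame when applying guidance.
        return [c for c in self.constraints if c.start <= t <= c.end]

timeline = ConstraintTimeline(prompt="walk forward, then wave", duration=4.0)
timeline.add("left_wrist", (0.3, 1.2, 0.4), start=2.0, end=3.5)
timeline.add("right_foot", (0.0, 0.0, 0.0), start=0.0, end=4.0)
print([c.joint for c in timeline.active_at(2.5)])  # → ['left_wrist', 'right_foot']
```

This is roughly what the web demo lets you build interactively: overlapping windows of keyframe and end-effector constraints layered on top of a text prompt.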