SteadyDancer: one portrait + one pose sequence (skeleton) → a smooth, identity-consistent animation.

Details

Model description

This workflow uses SteadyDancer to generate smooth, identity-consistent motion from a single reference image combined with a pose-video skeleton. Earlier pose-to-video systems often caused distortion—especially with stylized or unusual body proportions—but SteadyDancer avoids stretching the character to match the skeleton. The model anchors the identity to the reference frame while the Wan2.1 accelerated i2v backbone produces motion, ensuring the first video frame fully copies the original image. This makes it effective for characters with exaggerated proportions, robots, or cartoon bodies, producing motion that feels natural without forcing the geometry to match the pose too literally.
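The identity-anchoring idea described above can be sketched conceptually: the first output frame is the reference image itself, and each later frame is generated conditioned on both the reference and the corresponding pose skeleton. Note that `animate`, `toy_generate`, and the blending math here are illustrative stand-ins, not SteadyDancer's or Wan2.1's actual API.

```python
import numpy as np

def animate(reference, pose_frames, generate_frame):
    """Conceptual sketch of SteadyDancer-style animation.

    The identity is anchored by copying the reference image as frame 0;
    every subsequent frame is produced by the (here hypothetical)
    backbone conditioned on the reference and one pose skeleton.
    """
    frames = [reference.copy()]  # frame 0 fully copies the reference
    for pose in pose_frames[1:]:
        frames.append(generate_frame(reference, pose))
    return frames

def toy_generate(reference, pose):
    # Toy stand-in for the diffusion backbone: blends identity and pose
    # signals so the example runs end to end.
    return reference * 0.5 + pose * 0.5

reference = np.ones((4, 4))                      # stand-in portrait
poses = [np.zeros((4, 4)) for _ in range(3)]     # stand-in skeleton frames
video = animate(reference, poses, toy_generate)
```

In this sketch the first frame is bit-identical to the reference, which mirrors the behavior claimed above: the character's identity comes from the portrait, while the pose sequence only drives the motion of later frames.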

In practice, the quality largely depends on the realism of the skeleton video and the prompt used to guide style or imaginative details. Stronger pose influence may reduce tracking in extreme side rotations, but the model still keeps proportions stable. When tested with SD-Pose, the results show that tracking limits aren’t caused by inaccurate skeletons but by the inherent difficulty of extreme movement, while identity remains preserved. Realistic outputs follow the motion tightly, and stylized ones adapt just as well—even completing missing body parts when the reference lacks lower-body information. For short motion clips of a few seconds, SteadyDancer already provides a reliable and visually coherent animation pipeline suitable for both realistic and stylized characters.
