Wan2.2 Animate: Change Character or Background (Long Video)

Model description

This workflow does two things: it can make a character from a reference image copy the pose from a reference video while keeping the original background, or it can swap that character fully into the reference video so they perform the same actions. Building on this idea, I also tested swapping only the character's head into the reference video, but I'm still debugging some issues with the results; if I find a good solution, I'll release that workflow too. For important usage notes, please see the "Workflow Testing and Usage Instructions" below.

💻I've already set up an online ➡️ workflow for you so you can quickly try out the effect.

🎁Bonus: If you're signing up for the first time, you can get 1,000 free RH Coins by using my link and the invite code ➡️rh-v1182. Plus, you'll get another 100 RH Coins for logging in daily.

🚀Workflow Testing and Usage Instructions:

1. First, a big thank you to eddy for open-sourcing several LoRAs. Combined with KJ's Wan2.2 Animate workflow, these LoRAs perform well on both character consistency and video motion. Note in particular that "lightx2v_elite_it2v_animate_face" already has the lightx2v acceleration built in, so you don't need any other speed-up LoRAs. This LoRA also helps maintain the reference character's consistency, so I recommend a strength between 1.0 and 1.2. For "WAN22_MoCap_fullbodyCOPY ED", use a strength of 0.35-0.5 if you need high consistency with the character in the reference image, or 0.7-1.0 if you want to lean more toward the character in the reference video.
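The strength recommendations above can be summarized in a small sketch. The function name and the idea of picking a single default value from each range are mine for illustration; only the LoRA names and the strength ranges come from the notes above, and these are not actual workflow node parameters:

```python
# Suggested LoRA strengths from the notes above.
# The helper and its defaults are illustrative, not part of the workflow.

def suggest_strengths(prioritize_reference_image: bool) -> dict:
    """Return suggested LoRA strength values.

    prioritize_reference_image: True to favor consistency with the
    character in the reference image, False to lean toward the
    character in the reference video.
    """
    strengths = {
        # Has lightx2v acceleration built in; no extra speed-up LoRA needed.
        "lightx2v_elite_it2v_animate_face": 1.0,  # recommended range: 1.0-1.2
    }
    if prioritize_reference_image:
        strengths["WAN22_MoCap_fullbodyCOPY ED"] = 0.4   # range: 0.35-0.5
    else:
        strengths["WAN22_MoCap_fullbodyCOPY ED"] = 0.85  # range: 0.7-1.0
    return strengths

print(suggest_strengths(prioritize_reference_image=True))
```

Set the strengths on the corresponding LoRA loader nodes in the workflow; the dictionary above is just a compact way to keep the two scenarios straight.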

2. Because this workflow loads both the Wan2.2 model and SAMSegment, it requires a lot of VRAM, which is why I've enabled WanVideo Block Swap by default. In my testing, the entire workflow can run on a 24 GB GPU, but I recommend the 48 GB GPUs on RunningHub for a much smoother experience.

3. For different kinds of reference videos, I've preset two masking methods in the workflow; use only one of them. If your reference video contains a single character, use the "Single-character usage" group. If it contains multiple characters and you only want to mask certain areas, use the "Multi-role usage" group.

4. This workflow can generate longer-than-usual videos, but I don't recommend going beyond 30 seconds. In my tests of a 20-second video, the character's consistency started to decay around the 10-second mark, and the color tone also shifted slightly; I believe this is caused by the reference video's influence during the context-looping process. So keep the videos you generate with this workflow to around 20 seconds and no longer than 30. Going past 30 seconds can lead to unpredictable degradation or a serious loss of consistency.
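To translate those second marks into frame counts, here is a quick back-of-the-envelope sketch. The 16 fps output rate is my assumption (adjust it to whatever frame rate your workflow actually renders at); the seconds come from the observations above:

```python
# Rough frame-count arithmetic for the length guidance above.
# ASSUMPTION: a 16 fps output rate; change FPS if your workflow differs.

FPS = 16

def frames_for(seconds: float, fps: int = FPS) -> int:
    """Convert a duration in seconds to a whole number of frames."""
    return int(seconds * fps)

decay_onset  = frames_for(10)  # where consistency drift was first observed
recommended  = frames_for(20)  # suggested upper target for generation
hard_ceiling = frames_for(30)  # beyond this, expect serious degradation

print(decay_onset, recommended, hard_ceiling)  # 160 320 480
```

These are estimates for planning context-window and batch settings, not hard limits enforced by the workflow.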

5. For more detailed instructions, please see the notes inside the workflow.
