AnimateDiff Workflow: IPAdapter Transformations
Model description
The idea behind this workflow is to allow a smooth transition between characters while keeping the background static. It's not perfect, but I think it's a good start. The full generation for 48 frames took 17 GB of VRAM and 50 minutes on my 3090. If you're trying to save VRAM, I'd just skip the hires section of the workflow.
Full credit to Latent Vision for the base workflow:
Additional things I'd like to try:
IC-Light to improve lighting consistency (will this work if the character/camera rotates?)
IPAdapter for faces
One last note: the Limitless Vision model is fine, but I think this would probably work better with other models like Realistic Vision or DreamShaper. The problem I ran into was that those models don't seem to "understand" the IPAdapter images I was feeding in. More experimentation is needed!
If you are missing any models, see below for a partial list (and the download sketch after it). You will also need the IPAdapter/CLIP Vision models and the OpenPose ControlNet.
Rename to controlGIF_normal.ckpt in /models/controlnet: https://huggingface.co/crishhh/animatediff_controlnet/resolve/main/controlnet_checkpoint.ckpt?download=true
Put in /models/animatediff_models: https://huggingface.co/wangfuyun/AnimateLCM/resolve/main/AnimateLCM_sd15_t2v.ckpt
Put in /models/loras: https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/tree/main
Put in /models/loras: https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_adapter.ckpt
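If you'd rather script the downloads, here's a minimal Python sketch that fetches the files above into the right ComfyUI subfolders. A few things in it are assumptions on my part: the ComfyUI root at ~/ComfyUI, swapping HuggingFace /blob/ and /tree/ links for direct /resolve/ download links, and pytorch_lora_weights.safetensors as the filename inside the LCM-LoRA repo. Adjust paths and filenames to match your install.

```python
# Minimal sketch: download the workflow's models into a ComfyUI install.
# Assumption: ComfyUI lives at ~/ComfyUI -- change COMFYUI_DIR if yours differs.
import os
import urllib.request

COMFYUI_DIR = os.path.expanduser("~/ComfyUI")  # assumed install location

# (url, subfolder, filename) -- filenames follow the renaming notes above.
# The /resolve/ URLs are the direct-download form of the /blob/ and /tree/
# links listed in the text; the LCM-LoRA filename is assumed from its repo.
FILES = [
    ("https://huggingface.co/crishhh/animatediff_controlnet/resolve/main/controlnet_checkpoint.ckpt",
     "models/controlnet", "controlGIF_normal.ckpt"),  # renamed per the note above
    ("https://huggingface.co/wangfuyun/AnimateLCM/resolve/main/AnimateLCM_sd15_t2v.ckpt",
     "models/animatediff_models", "AnimateLCM_sd15_t2v.ckpt"),
    ("https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/resolve/main/pytorch_lora_weights.safetensors",
     "models/loras", "pytorch_lora_weights.safetensors"),
    ("https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_adapter.ckpt",
     "models/loras", "v3_sd15_adapter.ckpt"),
]

for url, subdir, name in FILES:
    dest_dir = os.path.join(COMFYUI_DIR, subdir)
    os.makedirs(dest_dir, exist_ok=True)  # create the folder if it's missing
    dest = os.path.join(dest_dir, name)
    if os.path.exists(dest):
        print(f"skipping {name} (already present)")
        continue
    print(f"downloading {name} ...")
    urllib.request.urlretrieve(url, dest)  # follows the HF redirect to the CDN
```

The script skips files that are already in place, so it's safe to rerun after a partial download.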
Tip jar: https://www.patreon.com/blankage



