AnimateDiff Workflow: IPAdapter Transformations

Model description

The idea with this workflow is to allow a smooth transition between characters while the background stays static. It's not perfect, but I think it's a good start. The full 48-frame generation took 17 GB of VRAM and 50 minutes on my 3090. If you're trying to save VRAM, I'd just skip the hi-res section of the workflow.
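
The transition itself most likely comes from ramping the per-frame IPAdapter weights between the two character reference images across the 48-frame batch. Here is a minimal plain-Python sketch of such a schedule; the ramp bounds are assumptions for illustration, not the workflow's exact settings:

```python
def ipadapter_crossfade(num_frames: int = 48, start: int = 16, end: int = 32):
    """Per-frame weights for crossfading two IPAdapter reference images.

    Character A holds full weight until `start`, ramps down linearly
    until `end`, and character B takes the complement, so the two
    weights always sum to 1.0 on every frame.
    """
    weights_a = []
    for f in range(num_frames):
        if f < start:
            w = 1.0
        elif f >= end:
            w = 0.0
        else:
            w = 1.0 - (f - start) / (end - start)
        weights_a.append(round(w, 4))
    weights_b = [round(1.0 - w, 4) for w in weights_a]
    return weights_a, weights_b

# 48 frames, with the blend happening over frames 16-32
weights_a, weights_b = ipadapter_crossfade()
```

Feeding the two weight lists to the two IPAdapter branches (e.g. via an IPAdapter weights node in ComfyUI) holds character A, blends, then holds character B, while the background conditioning stays fixed.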

Full credit to Latent Vision for the base workflow.

Additional things I'd like to try

  • IC-light to increase lighting consistency (will this work if character/camera rotates?)

  • IPAdapter for faces (see the sketch just below)
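
For the face idea, here is a hedged sketch of what face-specific IPAdapter conditioning looks like outside ComfyUI, using diffusers. The base model, checkpoint name, scale, and file names are assumptions for illustration; the actual workflow runs in ComfyUI:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

# Assumed SD1.5-class base model; swap in whichever checkpoint you use.
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")

# h94/IP-Adapter ships face-focused checkpoints alongside the standard ones.
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="models",
    weight_name="ip-adapter-full-face_sd15.bin",
)
pipe.set_ip_adapter_scale(0.6)  # assumed strength; tune per subject

face = load_image("reference_face.png")  # hypothetical reference crop

result = pipe(
    prompt="portrait photo, natural lighting",
    ip_adapter_image=face,
    num_inference_steps=25,
).images[0]
result.save("face_test.png")
```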

One last note: the Limitless Vision model is fine, but I think this would probably work better with other models such as Realistic Vision or DreamShaper. The problem I ran into is that those models didn't seem to "understand" the IPAdapter images I was feeding in. More experimentation is needed!

If you are missing any models, see below for a partial list. You will also need the IPAdapter/CLIP Vision models and the OpenPose ControlNet.

Tip jar: https://www.patreon.com/blankage
