fatberg_slim Image 2 Video Workflows
My ComfyUI Image 2 Video Workflows
I’ve been asked a few times about my workflow, so here it is.
Some people had issues loading the workflow from my videos, so I decided to upload them directly.
These are the setups I use to create my I2V videos.
You’ll find notes inside the workflows explaining what some of the nodes do.
You’ll probably need SageAttention and Triton installed.
It might still work without them if you rewire a few nodes. I left a note about that in the workflow, but I can’t guarantee it’ll run properly.
I didn’t build these workflows completely from scratch. I started with an existing one (I don’t know which one exactly) and just added whatever seemed useful for my setup.
I’m not an expert, so please keep in mind that I can only offer limited support if something doesn’t work right.
A Little Disclaimer
Before you ask: there’s no magic combination of settings I’m using to create my videos.
It’s honestly more trial and error than you’d expect. Sometimes I let my PC run overnight and wake up to 40 clips…
Out of those, maybe 2-3 are worth keeping. The rest are either hilarious, nightmare fuel, or just plain trash.
So don’t be discouraged if your first results look weird. That’s part of the fun.
Missing Files?
If you get a message about missing files when loading the workflow, don’t panic.
You can usually find those files just by googling their exact file names and downloading them into the matching folders inside your ComfyUI installation. Missing custom nodes can be installed via ComfyUI Manager.
Please don’t ask me where to get the files — I can’t provide help with that.
About “missing unet/clip” Warnings
You might see messages like this when running the workflow:
clip missing: ['encoder.block.0.layer.0.SelfAttention.q.scale_weight', ...]
That’s normal. It just means the checkpoint you’re using contains extra parameters (e.g. from a slightly different CLIP/T5 variant or a weight-normed build) that don’t have a 1:1 spot in your current text encoder/UNet. ComfyUI logs them as “missing,” but the model still loads and runs fine. If your outputs look normal, you can safely ignore these messages.
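To make the mechanism concrete, here’s a minimal, made-up sketch of how a weight loader can tolerate a mismatched checkpoint: names that match get loaded, the rest are only reported. The key names below are invented for illustration; this is not ComfyUI’s actual loading code.

```python
# Toy loader: load matching weights, report (but don't fail on) mismatches.
# Key names are invented for illustration.

def load_with_report(model_keys, checkpoint):
    loaded = {k: v for k, v in checkpoint.items() if k in model_keys}
    missing = sorted(set(model_keys) - set(checkpoint))     # model wants, ckpt lacks
    unexpected = sorted(set(checkpoint) - set(model_keys))  # ckpt has, model lacks
    return loaded, missing, unexpected

model_keys = {"q.weight", "k.weight"}
checkpoint = {"q.weight": 1.0, "k.weight": 2.0, "q.scale_weight": 0.5}

loaded, missing, unexpected = load_with_report(model_keys, checkpoint)
print(unexpected)  # ['q.scale_weight'] -- logged as a warning, then ignored
```

All the weights the model actually needs still get loaded, which is why the outputs look fine despite the warning.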
The Workflows
There are two versions:
I2V WAN MoEKsampler
I2V WAN Ksampler
I mainly use the WAN MoEKsampler workflow.
If you want to know exactly what it does, check out the GitHub page:
In short: it automatically splits the steps between the two samplers based on the sigma values of the noise schedule.
So you don’t have to do any manual splitting. Just set your steps and hit Run.
If you can’t or don’t want to use the WAN MoEKsampler, there’s also a version with the standard KSampler Advanced.
That one works the same way, except you’ll need to handle the step splitting yourself.
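If you’re doing the split manually, the idea can be sketched like this: walk the sigma schedule and find the first step where sigma drops below some boundary. The boundary value (0.875 here) and the toy schedule are assumptions for illustration; check the MoE KSampler’s GitHub page for the value it actually uses.

```python
# Hedged sketch of the sigma-based split. Boundary value is an assumption.

def split_step(sigmas, boundary=0.875):
    """Return the first step whose sigma drops below the boundary.
    Steps [0, split) go to the first (high-noise) sampler,
    steps [split, end) go to the second (low-noise) one."""
    for i, s in enumerate(sigmas):
        if s < boundary:
            return i
    return len(sigmas)

# Toy descending schedule; real sigmas come from your model/scheduler.
sigmas = [1.0, 0.95, 0.90, 0.85, 0.60, 0.30, 0.10, 0.0]
print(split_step(sigmas))  # 3 -> sampler 1: steps 0-3, sampler 2: steps 3-8
```

With two KSampler (Advanced) nodes, that number would go into `end_at_step` on the first sampler and `start_at_step` on the second, with the first set to return leftover noise.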
Output Info
Both workflows:
Save the last frame after VAE decode, before any upscaling — this gives you a clean base image for the next run.
Export both a 16 fps version and an upscaled + interpolated 32 fps version.
Just make sure to set your save paths on those nodes before running.
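For the curious, here’s a rough sketch of what that output stage amounts to, with numpy standing in for the actual ComfyUI nodes: grab the last decoded frame for the next run, and insert an in-between frame into every gap (16 fps → 32 fps). Real workflows use a learned interpolator (e.g. RIFE), not the naive blend shown here.

```python
# Sketch only: numpy stand-in for the save-last-frame + interpolation nodes.
import numpy as np

def output_stage(frames):
    last_frame = frames[-1]  # clean base image for the next I2V run
    # Naive 2x interpolation: insert the average of each adjacent pair.
    doubled = []
    for a, b in zip(frames[:-1], frames[1:]):
        doubled.append(a)
        doubled.append((a + b) / 2.0)
    doubled.append(frames[-1])
    return last_frame, np.stack(doubled)

frames = np.random.rand(17, 64, 64, 3)  # 17 frames, H x W x RGB
last, doubled = output_stage(frames)
print(doubled.shape[0])  # 33: one new frame in each of the 16 gaps
```

Doubling the frame count over the same clip duration is what turns the 16 fps output into the 32 fps one.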
