Wan i2v
Model description
Updated for Wan 2.2
The new version no longer uses a smooth pass.
(There is no small 1B version for Wan 2.2 this time.)
This workflow primarily uses GGUF-quantized models to reduce VRAM usage where possible. The current version runs comfortably on 16 GB of VRAM when using the 4-bit (q4_k_m) models.
Models Needed
Goes into models/unet
Goes into models/text_encoders
Goes into models/vae
4-step Lightning LoRAs
Goes into models/loras
Any good upscaler model; I recommend RealEsrgan_2xPlus.
Goes into models/upscale_models
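As a quick sanity check, the folder layout above can be sketched as a few shell commands. The `ComfyUI` install root is an assumption here; adjust the path to wherever your ComfyUI lives.

```shell
# Create the model folders listed above (root path is an assumption;
# point it at your own ComfyUI install). Each downloaded file goes
# into the matching subfolder.
mkdir -p ComfyUI/models/unet \
         ComfyUI/models/text_encoders \
         ComfyUI/models/vae \
         ComfyUI/models/loras \
         ComfyUI/models/upscale_models

# Confirm the layout
ls ComfyUI/models
```

If a model doesn't show up in the workflow's loader nodes after copying it in, refreshing or restarting ComfyUI usually makes it appear.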

