Wan 2.2 I2V for Low VRAM (GGUF)
Model description
My personal modification of HazardAI's Wan I2V workflow, adapted to use quantized GGUF models instead of the FP16 checkpoints so that it can run on low-VRAM GPUs.
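Because the workflow swaps FP16 checkpoints for GGUF files, loading the unet typically requires a GGUF loader such as the ComfyUI-GGUF custom nodes. As a rough illustration of why quantization helps, the sketch below compares the approximate on-disk/in-memory footprint of the weights at FP16 versus common GGUF quantization levels. The 14B parameter count and the bits-per-weight figures are assumed example values, not measured specs of this release.

```python
# Back-of-the-envelope estimate of weight footprint at different precisions.
# Parameter count and bits-per-weight are assumed example values, not
# measurements of the actual Wan 2.2 checkpoints.

def weight_size_gib(num_params: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, ignoring activations,
    the text encoder, the VAE, and quantization metadata overhead."""
    return num_params * bits_per_weight / 8 / 1024**3

PARAMS = 14e9  # assumed parameter count, for illustration only

for label, bits in [("FP16", 16.0), ("Q8_0 (~8-bit)", 8.5), ("Q4_K_M (~4-bit)", 4.8)]:
    print(f"{label:>16}: ~{weight_size_gib(PARAMS, bits):.1f} GiB of weights")
```

Actual VRAM use during generation will be higher than the weights alone, since activations, the text encoder, and the VAE also need memory, but the relative savings from quantization are what make the workflow fit on smaller cards.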
Models
Unet
The following files should be saved into /models/unet
Text Encoders
The following files should be saved into /models/text_encoders
VAE
The following files should be saved into /models/vae
LoRAs (for speed)
The following files should be saved into /models/loras

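Once the downloads are in place, a quick sanity check like the sketch below can confirm that each file landed in the folder the workflow expects. The ComfyUI install path and the file extensions per folder are assumptions; adjust them to your own setup and the exact files you downloaded.

```python
# Minimal sanity check that downloaded files sit in the folders listed above.
# The install path and the per-folder extensions are assumptions, not part of
# the original workflow; point COMFYUI_ROOT at your own ComfyUI directory.
from pathlib import Path

COMFYUI_ROOT = Path("~/ComfyUI").expanduser()  # assumed install location

EXPECTED_DIRS = {
    "unet": (".gguf",),                          # quantized Wan 2.2 unet files
    "text_encoders": (".gguf", ".safetensors"),  # text encoder files
    "vae": (".safetensors",),                    # VAE
    "loras": (".safetensors",),                  # speed LoRAs
}

for folder, extensions in EXPECTED_DIRS.items():
    path = COMFYUI_ROOT / "models" / folder
    files = [f.name for f in path.glob("*") if f.suffix in extensions] if path.is_dir() else []
    status = ", ".join(files) if files else "no matching files found"
    print(f"models/{folder}: {status}")
```

If the files show up here but the workflow's loader nodes still do not list them, refreshing or restarting ComfyUI usually repopulates the model dropdowns.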