Lightning Lora, massive speed up for Wan2.1 / Wan2.2 made by Lightx2v / Kijai
Model description
2.2
New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet.
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions were just updated 09/28. It's probably still best to run a step or two with CFG and without the lora on the high-noise model to establish motion, as usual, like:
2 steps high noise without the low-step lora at 3.5 CFG
2 steps high noise with lora and 1 CFG
2-4 steps low noise with lora and 1 CFG
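The three-stage split above can be sketched as a simple plan. This is a minimal, hypothetical illustration of the step/CFG/lora layout from the list, not actual ComfyUI node code; the function name and tuple layout are my own.

```python
def lightning_plan(low_noise_steps=3):
    """Return (stage, steps, cfg, lora_enabled) tuples for the suggested split.

    Stage 1: high noise, no lora, CFG 3.5 -- establishes motion.
    Stage 2: high noise, lora on, CFG 1.
    Stage 3: low noise, lora on, CFG 1 (2-4 steps; default 3 here is my choice).
    """
    return [
        ("high-noise, no lora", 2, 3.5, False),
        ("high-noise, lora",    2, 1.0, True),
        ("low-noise, lora",     low_noise_steps, 1.0, True),
    ]

plan = lightning_plan()
total_steps = sum(steps for _, steps, _, _ in plan)
print(total_steps)  # 7 with the defaults above
```

Tools like the TripleKSampler node linked above wire up exactly this kind of three-sampler split for you.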
It's definitely a big improvement either way.
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. Example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one also.
I'm also putting up the rank 128 versions extracted by Kijai, they are double the size but are slightly better quality.
I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a 2-sampler workflow anymore IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement like it did before for image to video.
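For reference, the single-sampler settings mentioned above can be collected in one place. The key names below are illustrative, not actual ComfyUI node parameters, and the lora list reflects the combination suggested in this post.

```python
# Suggested single-sampler settings for Wan2.1 I2V with the Lightning lora
# (key names are my own shorthand, not ComfyUI parameter names).
settings = {
    "sampler": "lcm",
    "steps": 4,
    "cfg": 1.0,
    "shift": 8,
    # Lightning I2V lora plus the Pusa V1 lora for extra movement:
    "loras": ["lightx2v_i2v", "pusa_v1"],  # filenames are assumptions
}
print(settings["steps"], settings["cfg"])
```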
Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
Old:
lightx2v made a 14B self forcing model that is a massive improvement compared to Causvid / Accvid. Kijai extracted it as a lora. Example above was generated in about 35 seconds on a 4090 using 4 steps, lcm, 1 cfg, 8 shift, still playing with settings to see what is best.
Please don't send me buzz or anything, give the lightx2v team or kijai support if anyone.