2_steps_WAN2.2. i2i, upscale image, v2v, upscale video with t2v_low_noise_14b model
Model description
"civitai" removed all my best examples because the source was real photos/videos. You can get better results from your own/other people's photos.
To UPSCALE the image use:
DENOISE: 0.015-0.15 (the farther away or smaller an object is in the frame, the more it is subject to change). Use 0.015-0.02 to keep a person recognizable.
SAMPLER: euler for images with artifacts and poor detail; res_2s (from the RES4LYF nodes) for better images.
You can also chain two samplers in a row: first euler, then res_2s.
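As a rough mental model (my own assumption about how a sampler's denoise parameter is typically implemented, not something specific to this workflow), denoise decides what fraction of the sampling steps are re-run on your input, which is why values around 0.015-0.02 barely alter a face:

```python
# Sketch only: assumes denoise maps linearly to the fraction of steps re-run,
# as it does in many diffusion samplers. Numbers are illustrative, not measured.
def steps_actually_run(total_steps: int, denoise: float) -> int:
    """How many of the sampling steps are applied to the input image."""
    return max(1, round(total_steps * denoise))

for d in (0.015, 0.02, 0.15, 0.5):
    print(f"denoise={d:<5} -> {steps_actually_run(30, d)} of 30 steps re-run")
```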
For i2i you can use any denoise strength. You can even generate images by starting from a small picture, which lets you skip the t2v_high_noise_14b model and save up to 14 GB of RAM/VRAM; a sketch of that idea follows below.
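A minimal sketch of the "start from a small picture" idea, assuming you prepare the input outside the workflow with Pillow (the target resolution and file names here are hypothetical):

```python
# Hypothetical pre-processing step: upscale a small source picture so it can be
# loaded directly as the i2i input, letting the low-noise model do the refinement.
from PIL import Image

def prepare_i2i_input(path: str, target_w: int = 1280, target_h: int = 720) -> Image.Image:
    """Resize a small source picture up to the working resolution."""
    img = Image.open(path).convert("RGB")
    return img.resize((target_w, target_h), Image.LANCZOS)

# upscaled = prepare_i2i_input("small_reference.png")
# upscaled.save("i2i_input.png")  # load this as the i2i source image in the workflow
```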
For v2v, if the denoise is too high, frame-to-frame coherence is lost and you get a "Deforum"-style result.
Unfortunately I can't test it at a higher resolution because I only have 8 GB of VRAM.
If you train a LoRA on a face, you can perform a very high-quality ROOP-style face swap, even with objects covering the face.
/model/1817671?modelVersionId=2057100 - t2v_low_noise_14b
