Wan 2.2 A14B I2V GGUF UPUP

Model description

V1.1 (BETA)

As usual, I was browsing the web when a very eye-catching model caught my attention [here]. I read its introduction and thought it looked excellent, but I had one concern: it is an FP8 model, I wasn't sure my 4060 could handle it, and the fact that the single file is 19 GB made me even more nervous. With a "let's give it a try" attitude, I found the model to be incredibly powerful. I haven't tried photorealistic images yet, but it seems exceptionally good at anime styles, and both its VRAM usage and generation time are on par with, or even lower than, Q8 GGUF.

So if your setup allows it, you absolutely should try this model; it is a very cool experience. Compared with the older GGUF versions, this new model is more accurate and more capable at detail and style transfer, and its training scope is wider (including NSFW content), which leads to a huge leap in image-generation quality.

CLIP Vision was added. Honestly, I'm not entirely sure what it does, but it seems to help the model understand composition and camera movement? In any case, it at least doesn't cause content to go missing. (I have confirmed it is useless; you can remove it.)
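
For context, "CLIP Vision" in an I2V setup usually means encoding the input image with a CLIP image encoder and handing that embedding to the video model as extra conditioning. The sketch below only illustrates that idea with Hugging Face transformers and a generic CLIP checkpoint; the model ID and file path are assumptions, not the exact node or checkpoint this workflow loads.

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

# Assumption: a generic CLIP vision checkpoint, not necessarily the one Wan I2V ships with.
MODEL_ID = "openai/clip-vit-large-patch14"

processor = CLIPImageProcessor.from_pretrained(MODEL_ID)
encoder = CLIPVisionModelWithProjection.from_pretrained(MODEL_ID)

image = Image.open("input.png").convert("RGB")        # the I2V source image (hypothetical path)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    output = encoder(**inputs)

image_embeds = output.image_embeds                    # (1, 768) global image embedding
# In an I2V pipeline this embedding travels alongside the text prompt, giving the video
# model a coarse sense of the reference image's overall composition.
```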

The model already has acceleration built in, so you don't need to manually add the LIGHTX2 LORA externally.

I added a "Cut the First 4 Frames" option, which trims the first 4 frames from the output video. Many WAN I2V models share this flaw due to a model-level issue: if the input image has a strong style, the model tends to drift toward a more mainstream style. So I cut off the first 4 frames, where the change is largest, to prevent this. The option can be enabled as needed.
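
Here is a minimal sketch of what the trimming step amounts to, assuming frames are passed around as a (T, H, W, C) tensor the way ComfyUI image batches are; the function name and dummy shapes are mine, not the workflow's actual node.

```python
import torch

def trim_leading_frames(frames: torch.Tensor, n: int = 4) -> torch.Tensor:
    """Drop the first n frames from a (T, H, W, C) batch of decoded frames."""
    if frames.shape[0] <= n:
        return frames          # clip too short to trim; leave it unchanged
    return frames[n:]

# Hypothetical usage: `frames` would come from the VAE decode,
# `trimmed` would go on to the video-combine / save step.
frames = torch.rand(81, 480, 832, 3)    # dummy 81-frame clip
trimmed = trim_leading_frames(frames)   # 77 frames; the heavy "style drift" frames are gone
```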



V1.0

This workflow is for WAN2.2 A14B I2V GGUF. It doesn't have any special features such as infinite video or first-to-last-frame looping; it is simply an optimized workflow.

This workflow has several options:

- Two-step decoding
- Optimize VRAM
- Sage Attention
- Color restoration
- x2 upscaling
- Frame interpolation
- Prompt sound and clear VRAM

You can toggle these on and off in the "Function selection" section.
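
As a rough, outside-of-ComfyUI illustration of what such switches control, here is a hypothetical toggle block plus a VRAM-clearing helper; the dictionary keys and function name are mine and do not correspond to the workflow's node names.

```python
import gc
import torch

# Hypothetical toggles mirroring the "Function selection" switches in this workflow.
FUNCTIONS = {
    "two_step_decoding": True,
    "optimize_vram": True,
    "sage_attention": False,            # needs the sageattention package and a supported GPU
    "color_restoration": True,
    "x2_upscaling": False,
    "frame_interpolation": False,
    "prompt_sound_and_clear_vram": True,
}

def clear_vram() -> None:
    """Release cached GPU memory between heavy stages (roughly what "clear VRAM" does)."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()

if FUNCTIONS["prompt_sound_and_clear_vram"]:
    clear_vram()
```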

I borrowed heavily from the workflow at /model/1911157/wan-22-5b-i2v-workflow; you could even say this is just the A14B version of it. If you can, take a look at that author's work as well.

The one thing I haven't been able to solve is the "dirty frames" that can appear when generating a video. If you have any ideas, please share them with me and I may incorporate them into the workflow.

Hey, I’m just a beginner, and I haven’t produced any great images yet. If you’re willing to use this workflow and produce an excellent video, please share it.
