Vid2Vid Hunyuan GGUF + Upscaler 12GB & 16GB & 24GB V2V Workflow

Model description

⋆.°🌸 Some considerations 🌸˚˖⋆

In addition to the nodes that can be installed with ComfyUI-Manager, you need Kijai's custom nodes to run this workflow.

.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.

Using the Hunyuan t2v 720p GGUF Q4_K_M (a mid-size quant) and a 73-frame generation:

For 12GB VRAM, you can upscale 240x320 by approx. 1.3x 🫤

For 16GB VRAM, you can upscale 240x320 by approx. 2.5x 😏

For 24GB VRAM, you can upscale 240x320 by approx. 3.5x 😎 ~14min @ RTX3090
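To see what those factors mean in pixels, here is a quick sketch of the output resolutions each tier implies for a 240x320 source. The rounding to multiples of 16 is my own assumption (many video models want dimensions aligned to the latent grid), not something taken from the workflow itself:

```python
# Approximate upscale factors per VRAM tier, from the notes above.
TIERS = {"12GB": 1.3, "16GB": 2.5, "24GB": 3.5}

def upscaled(width: int, height: int, factor: float) -> tuple[int, int]:
    """Scale a resolution and round each side down to a multiple of 16.

    The multiple-of-16 alignment is an assumption for latent-space
    friendliness, not a rule stated by the workflow.
    """
    return (int(width * factor) // 16 * 16, int(height * factor) // 16 * 16)

for vram, factor in TIERS.items():
    w, h = upscaled(320, 240, factor)
    print(f"{vram}: 240x320 -> {h}x{w}")
```

So a 16GB card lands somewhere around an 800-wide output, a 24GB card around 1120 wide.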

.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.

For input videos that weren't generated by Hunyuan:

Use the arrows on the frame load cap to set it automatically, following Hunyuan's frame-count rule (4*x + 1).
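If you'd rather compute the cap yourself, the 4*x + 1 rule just means the frame count must be one more than a multiple of four. A minimal sketch (the function name is mine) that rounds any count down to the nearest valid value:

```python
def nearest_hunyuan_frames(n: int) -> int:
    """Round n down to the nearest valid Hunyuan frame count (4*x + 1).

    Valid counts are 1, 5, 9, 13, ... so e.g. a 100-frame clip
    should be capped at 97 frames.
    """
    if n < 1:
        return 1  # the smallest valid count
    return ((n - 1) // 4) * 4 + 1
```

For example, `nearest_hunyuan_frames(100)` gives 97, and 73 (already valid, 4*18 + 1) passes through unchanged.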

.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.

Prompting

That's right, prompting is manual for now. I'm thinking of creating an automatic version using JoyTag or CLIP Vision from OpenAI.

.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.・。゚.

JK CHSTR 2025 - Hunyuan by Tencent
