ComfyUI_INTEL_LOWVRAM_WAN2.2_ADD_AUDIO

Model description

I created this test workflow with MMAudio. The custom node can be found in ComfyUI Manager.

Here are the models, VAE, and everything else you'll need:

https://huggingface.co/Kijai/MMAudio_safetensors/tree/main

Here's the GitHub repo, just in case:

https://github.com/kijai/ComfyUI-MMAudio/tree/main/mmaudio

This is new and still experimental, so expect weird things to happen.

TWO VERY IMPORTANT THINGS TO DO IF YOU ARE ON ANYTHING THAT ISN'T NVIDIA.

I got this workflow working by changing a few things. I run on an Intel Arc B580, so there's no CUDA for me.

I was able to make changes to two files that let me run this on the CPU. In ComfyUI\custom_nodes\comfyui-mmaudio\nodes.py I changed lines 77 and 78 to:

clip_frames = torch.stack([clip_transform(frame.cpu()).to("cpu") for frame in clip_frames])
sync_frames = torch.stack([sync_transform(frame.cpu()).to("cpu") for frame in sync_frames])
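Those two lines just force every frame onto the CPU. If you want the same edit to keep working when a GPU is available, here's a rough sketch of a device-agnostic version (my own suggestion, not part of the MMAudio node, and it assumes a recent PyTorch build where the optional torch.xpu backend exists):

import torch

# Prefer CUDA, then Intel XPU if this PyTorch build supports it, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = "cuda"
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    device = "xpu"
else:
    device = "cpu"

# Same change as above, but the target device is no longer hardcoded.
clip_frames = torch.stack([clip_transform(frame.cpu()).to(device) for frame in clip_frames])
sync_frames = torch.stack([sync_transform(frame.cpu()).to(device) for frame in sync_frames])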

In ComfyUI\custom_nodes\comfyui-mmaudio\mmaudio\ext\autoencoder\vae.py

I changed the four places where "cuda" was mentioned to "cpu".
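I can't paste the exact lines here, but the edits are all the same shape. Purely as an illustration (the names below are made up and the real code in vae.py won't match this word for word), it's this kind of change:

# before: the code pins things to the CUDA device
model = model.to("cuda")
weights = torch.load(checkpoint_path, map_location="cuda")

# after: point the same calls at the CPU instead
model = model.to("cpu")
weights = torch.load(checkpoint_path, map_location="cpu")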

It has worked in my tests, but I don't generate long videos, so I'm not sure how far it can go.
