Rebels Flux 2 GGUF

Details

Model description

THIS IS FLUX 2!

(not flux 1, do not be confused.)

This is a condensed GGUF version of the ComfyUI template workflow for FLUX 2, designed with LOW VRAM USERS in mind.

Replaces the diffusion model with GGUF quantizations, plus an optional GGUF text encoder for extremely low VRAM users. (For some reason the fp8 text encoder works just as well as the fp16 with WAY less resource use. BE WARNED: the text encoder is HUGE, around 18GB even at fp8. It still runs easily on my 8GB VRAM card with 16GB of regular RAM. I think that's because the text encoder and the GGUF diffusion weights are never stored in VRAM at the same time: you're either encoding the prompt or running sampling steps, never both, so it should run well on most PCs.)
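The sequential-offload reasoning above can be sketched as a quick feasibility check. This is a minimal sketch; the sizes are illustrative assumptions (the ~18GB fp8 encoder figure is from this card, the diffusion GGUF size is hypothetical), not measured values:

```python
# Rough feasibility check for the sequential-offload setup described above.
# Sizes in GB; the diffusion GGUF size is a hypothetical Q4-class estimate.
TEXT_ENCODER_GB = 18.0   # fp8 text encoder (approximate, per this card)
DIFFUSION_GGUF_GB = 7.0  # hypothetical quantized diffusion model size

def peak_memory_gb(sequential: bool) -> float:
    """Peak combined RAM+VRAM needed for model weights.

    With sequential offload, only one model is resident at a time,
    so the peak is the larger of the two instead of their sum.
    """
    if sequential:
        return max(TEXT_ENCODER_GB, DIFFUSION_GGUF_GB)
    return TEXT_ENCODER_GB + DIFFUSION_GGUF_GB

print(peak_memory_gb(sequential=True))   # 18.0 -> fits 8GB VRAM + 16GB RAM
print(peak_memory_gb(sequential=False))  # 25.0 -> would not fit
```

This is why the huge text encoder isn't a dealbreaker: the binding constraint is the larger single model, not the total.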

IMPORTANT!!!

The FluxGuidance node is tricky. For TEXT, you may need to lower the value from 11.0 toward 8.0 if artifacting is too strong. I noticed the GGUF model struggles with text at lower guidance values, so I bumped it up to 11, which fixed the text but can sometimes cause artifacting in other areas of the image. If it's not working for you, test values between 8 and 11. Lower values lose text entirely, bringing back the old "flux text" issue, so keep it above 8.
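The tuning advice above amounts to a small sweep over the guidance range, starting high (good text) and stepping down until artifacting elsewhere clears up, never dropping below 8. A minimal sketch of generating those candidate values (you would plug each one into the FluxGuidance node by hand; the step size here is an arbitrary choice):

```python
# Candidate FluxGuidance values for the sweep suggested above.
# Start at 11 so text renders correctly, step down to reduce
# artifacting elsewhere, and stop at 8 to avoid the "flux text" issue.

def guidance_candidates(high: float = 11.0, low: float = 8.0, step: float = 1.0):
    """Yield guidance values from high down to low (inclusive)."""
    value = high
    while value >= low:
        yield round(value, 2)
        value -= step

print(list(guidance_candidates()))  # [11.0, 10.0, 9.0, 8.0]
```

Use a smaller step (e.g. 0.5) if the sweet spot for your prompt falls between whole numbers.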

LINKS:

FLUX 2 GGUF

https://huggingface.co/orabazes/FLUX.2-dev-GGUF/tree/main

Text Encoder GGUF

https://huggingface.co/chatpig/flux2-dev-gguf/tree/main

Text Encoder (fp8 or fp16)

https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/text_encoders

Vae

https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/vae
