GGUF: Flux Unchained
Model description
[Note: The download is a zip; unzip it to get the GGUF file. Civitai doesn't support the GGUF format natively, hence this workaround.]
GGUF version of FluxUnchained by socalguitarist; credit goes to him for tuning this model. I converted it to GGUF using a modified version of this script.
It can be used in ComfyUI with this custom node, or with Forge UI.
See https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050 to learn more about Forge UI's GGUF support and to find download links for the VAE, clip_l, and t5xxl models.
Which model should I download?
[Current situation: with the updated Forge UI and ComfyUI (GGUF node), I can run Q8_0 on my 11 GB 1080 Ti.]
Download the one that fits in your VRAM. The additional inference cost is quite small as long as the model fits on the GPU. Size order is Q4_0 < Q4_1 < Q5_0 < Q5_1 < Q8_0.
Q4_0 and Q4_1 should fit in 8 GB VRAM
Q5_0 and Q5_1 should fit in 11 GB VRAM
Q8_0 if you have more!
Note: with CPU offloading, you can still run a model even if it doesn't fit in your VRAM, at reduced speed.
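The guidance above can be sketched as a small helper. This is only an illustration of the rule of thumb in this post; the `pick_quant` function and its thresholds are my own hypothetical encoding, not part of any tool:

```python
def pick_quant(vram_gb: float) -> str:
    """Rough quant recommendation by VRAM, per the notes above (illustrative thresholds)."""
    if vram_gb >= 12:
        return "Q8_0"          # plenty of headroom
    if vram_gb >= 11:
        return "Q5_1"          # Q5_0 / Q5_1 should fit in 11 GB
    if vram_gb >= 8:
        return "Q4_1"          # Q4_0 / Q4_1 should fit in 8 GB
    return "Q4_0 (with CPU offloading)"  # model larger than VRAM still runs, slower

print(pick_quant(11))  # -> Q5_1
```

With the updated Forge UI / ComfyUI GGUF node, offloading blurs these boundaries (Q8_0 runs on an 11 GB 1080 Ti), so treat the thresholds as a starting point.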
Updates
V2: I created the original (v1) from an fp8 checkpoint, so it went through double quantization and accumulated extra error; as a result, v1 couldn't produce sharp images. For v2 I manually merged from the bf16 Dev checkpoint and then made the GGUF. This version produces more detail and much crisper results.
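The double-quantization effect can be illustrated with a toy uniform quantizer (this is a simplified stand-in I wrote for illustration; real fp8 and GGUF quants are non-uniform and block-scaled, but the error-accumulation principle is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100_000)  # stand-in for a tensor of bf16 weights

def quantize(x: np.ndarray, step: float) -> np.ndarray:
    # Toy uniform quantizer: snap each value to a grid of spacing `step`.
    return np.round(x / step) * step

fine, coarse = 1 / 128, 1 / 16  # hypothetical "Q8-like" and "fp8-like" grids

# v2 path: quantize the bf16 weights once, directly.
err_direct = np.sqrt(np.mean((w - quantize(w, fine)) ** 2))
# v1 path: go through the coarser intermediate format first.
err_double = np.sqrt(np.mean((w - quantize(quantize(w, coarse), fine)) ** 2))

# The two-stage error is dominated by the coarse first stage,
# so it comes out several times larger than the direct quantization error.
print(err_direct, err_double)
```

In other words, once precision is lost to the fp8 intermediate, the final GGUF can never recover it, which is why starting from the bf16 checkpoint gives crisper results.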
All the license terms associated with Flux.1 Dev apply.