GGUF: FastFlux Unchained (FluxUnchained merged with Flux.S)


Model description

FP8 all-in-one model here: /model/671478 (but try the GGUF first!)

[Note: Unzip the download to get the GGUF. Civitai doesn't support the format natively, hence this workaround.]

A merge of FluxUnchained and FastFlux, converted to GGUF. As a result, it can generate artistic NSFW images in 4-8 steps while consuming very little VRAM. The Q4_0 model uses around 6.5 GB of VRAM and takes around 1.5 min to generate a 1024x1024 image at 8 steps. [See https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050 to learn more about Forge UI GGUF support and where to download the VAE, clip_l and t5xxl models.]
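Outside Forge and Comfy, recent diffusers releases can also load Flux GGUF checkpoints directly. A minimal sketch, assuming diffusers >= 0.32 with GGUF support (`pip install diffusers gguf`) and a placeholder filename for whichever quant you downloaded; the base repo supplies the VAE and text encoders mentioned above:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Placeholder path -- use whichever quant you downloaded (Q4_0, Q5_1, ...).
ckpt_path = "fastflux-unchained-Q4_0.gguf"

# Load only the transformer from the GGUF file, dequantizing on the fly.
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# The rest of the pipeline (VAE, clip_l, t5xxl) comes from the base repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a painterly portrait, dramatic lighting",
    num_inference_steps=8,  # 4-8 steps, per the description above
    height=1024, width=1024,
).images[0]
image.save("out.png")
```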

You can also combine it with other LoRAs to get the effect you want.

Which model should I download?

[Current situation: Using the updated Forge UI and ComfyUI (GGUF node), I can run Q8_0 on my 11 GB 1080 Ti.]

Download the one that fits in your VRAM: the extra inference cost is small as long as the model fits entirely on the GPU. Size order is Q4_0 < Q4_1 < Q5_0 < Q5_1 < Q8_0 (see the rough sizing sketch after this list).

  • Q4_0 and Q4_1 should fit in 8 GB VRAM

  • Q5_0 and Q5_1 should fit in 11 GB VRAM

  • Q8_0 if you have more!
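A back-of-the-envelope way to see why those buckets make sense: Flux has roughly 12B transformer parameters, and each quant stores a known number of bits per weight. The bits-per-weight figures below are my own approximations, not official numbers; real GGUF files carry some per-block overhead:

```python
# Approximate bits per weight for common GGUF quant types (assumption,
# not official figures; actual file sizes vary by tensor layout).
BPW = {"Q4_0": 4.5, "Q4_1": 5.0, "Q5_0": 5.5, "Q5_1": 6.0, "Q8_0": 8.5}

def pick_quant(vram_gb: float, n_params: float = 12e9,
               headroom_gb: float = 1.0) -> str:
    """Largest quant whose weights still leave rough headroom for
    activations, the VAE and the text encoders."""
    best = "Q4_0"
    for name, bpw in sorted(BPW.items(), key=lambda kv: kv[1]):
        weights_gb = n_params * bpw / 8 / 1e9
        if weights_gb + headroom_gb <= vram_gb:
            best = name
    return best

print(pick_quant(8.0))   # -> Q4_0  (matches the first bucket)
print(pick_quant(11.0))  # -> Q5_1  (matches the second bucket)
```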

Note: With CPU offloading, you will be able to run a model even if it doesn't fit in your VRAM.
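Continuing the diffusers sketch above, offloading is a single call (both are standard diffusers APIs; use one of them instead of `.to("cuda")`):

```python
# Trade speed for VRAM: keep submodules on the CPU and move each one to
# the GPU only while it runs.
pipe.enable_model_cpu_offload()

# For the tightest budgets: offload layer by layer (much slower, lowest VRAM).
# pipe.enable_sequential_cpu_offload()
```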

LoRA usage tips

The model works pretty well with LoRAs (tested in ComfyUI), but you may need to increase the step count a little (8-10).
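In the diffusers sketch from earlier, that would look something like this (the LoRA filename is a placeholder; `load_lora_weights` is the standard diffusers call):

```python
# Placeholder LoRA file; any Flux-compatible LoRA should work.
pipe.load_lora_weights("path/to/your_style_lora.safetensors")

image = pipe(
    "a painterly portrait, dramatic lighting",
    num_inference_steps=10,  # nudge steps up to 8-10 with a LoRA active
    height=1024, width=1024,
).images[0]
image.save("out_lora.png")
```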

All license terms associated with Flux.1 Dev and Schnell apply.
