GGUF (Flux) workflow v2.0
--- v2.0 has 3 LoRA slots, Save Image with Metadata, an optional upscale, and is generally neater than the quick share ---
Quick share for those asking in the comments: the workflow is built around this model (I used Q8)
/model/647237?modelVersionId=724149
It can also be used with F16, which gives better quality than Q8 but is harder on your PC
/model/662958/flux1-dev-gguf-f16
Direct downloads for the GGUF models are here:
https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main
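If you prefer grabbing the file from a script instead of the browser, here is a minimal sketch using huggingface_hub; the exact filename and the models/unet target folder are assumptions, so check the repo's file list and your own ComfyUI install:

```python
from huggingface_hub import hf_hub_download

# Repo from the link above. Filename and destination are assumptions:
# pick the quant you actually want and point local_dir at your own ComfyUI folder.
path = hf_hub_download(
    repo_id="city96/FLUX.1-dev-gguf",
    filename="flux1-dev-Q8_0.gguf",   # e.g. flux1-dev-Q6_K.gguf or flux1-dev-F16.gguf instead
    local_dir="ComfyUI/models/unet",  # where the GGUF unet loader usually looks
)
print(f"Saved to {path}")
```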
😂 the provided prompt is a parody of what Google Search returns for the meaning of "Lada" 😂
If Q8 is on the edge of your VRAM, you can also use Q6_K, which is smaller at almost the same quality!
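For anyone who wants to run the workflow headless, here is a minimal sketch that queues it through ComfyUI's local /prompt API and swaps the quant level. It assumes you exported the workflow in API format to gguf_flux_workflow_api.json and that the GGUF loader node is ComfyUI-GGUF's UnetLoaderGGUF; both the filename and the node name are assumptions, so adjust to your own export:

```python
import json
import urllib.request

# Load the workflow exported from ComfyUI in "API format" (filename is an assumption).
with open("gguf_flux_workflow_api.json") as f:
    workflow = json.load(f)

# Swap the quant level by editing the GGUF loader node's filename
# (e.g. drop from Q8_0 to Q6_K if VRAM is tight).
for node in workflow.values():
    if node.get("class_type") == "UnetLoaderGGUF":
        node["inputs"]["unet_name"] = "flux1-dev-Q6_K.gguf"

# Queue the prompt on a locally running ComfyUI instance (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```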