ComfyUI beginner friendly Flux.2 Klein 4B GGUF Simple Image Inpainting Outpainting Workflow by Sarcastic TOFU

Model Description

This is a very simple, beginner-friendly ComfyUI workflow for Flux.2 Klein 4B GGUF image inpainting and outpainting. It works with plain natural-language editing instructions to edit your desired target image. You get two editing options in one workflow: standard selective inpainting without masking, and selective inpainting with canvas expansion & outpainting, also without masking. I have included prompts and examples for both editing options. I used a Q8 model that works well with my 8GB AMD GPU. If you have a better GPU you can simply swap the model with the full unquantized Flux.2 Klein 4B model, or if you have a weaker GPU you can find GGUF models for lower-end hardware here ( Unsloth's HuggingFace repo for Flux.2 Klein 4B GGUF models - https://huggingface.co/unsloth/FLUX.2-klein-4B-GGUF/tree/main ) and other matching resources.
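
If you want to see which GGUF quantization levels that repo currently offers before downloading, here is a quick sketch using the huggingface_hub Python package (the tooling choice is my assumption; browsing the repo page in a browser works just as well):

```python
# Quick sketch (assumes `pip install huggingface_hub`): list the GGUF files
# in Unsloth's Flux.2 Klein 4B repo so you can pick a quantization level
# that fits your VRAM before downloading anything.
from huggingface_hub import list_repo_files

repo_id = "unsloth/FLUX.2-klein-4B-GGUF"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]

for name in sorted(gguf_files):
    print(name)  # e.g. Q2/Q3/Q4/Q5/Q6/Q8 variants of flux-2-klein-4b
```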

The Flux.2 Klein 4B and 9B models are a new family of high-speed AI image generators that use a "rectified flow" architecture to unify image generation and professional-grade editing into a single, compact package. These models are significantly faster than older versions because they use "step-distillation," which allows them to create high-quality images in just 4 steps (achieving sub-second speeds on modern hardware) rather than the dozens of steps required by previous models. The 4B variant is released under a permissive Apache 2.0 License for both personal and commercial use, while the more powerful 9B variant uses a Non-Commercial License intended for research and personal projects. Both models support 11 native aspect ratios ranging from 1:1 square to 21:9 ultrawide and 9:21 vertical, and they can produce sharp images up to 4 megapixels (such as 2048x2048). To make them even more accessible, there are quantized models like the FP8 (8-bit) and NVFP4 (4-bit) versions, which reduce the "brain size" of the model to save memory; specifically, the FP8 version is about 1.6x faster and uses 40% less VRAM, while the NVFP4 version is up to 2.7x faster and uses 55% less VRAM. Because of these optimizations, the 4B model can run on systems with as little as 8GB to 12GB of VRAM, and with the lowest Flux.2 Klein 4B Q2 or Q3 GGUFs it can even run on low-end GPUs with 6GB, 4GB, or 2GB of VRAM, or on modern integrated graphics (iGPUs) from the latest laptop and mini PC chips.
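
As a rough back-of-the-envelope sketch of why the lower-bit quantizations fit on small GPUs (ballpark numbers only; real GGUF files differ because some layers stay in higher precision, and the text encoder, VAE and activations need memory on top of the weights):

```python
# Ballpark weight-memory estimate for a ~4B-parameter model at different
# quantization levels. Actual file sizes vary with mixed-precision layers
# and container overhead; this is only the order-of-magnitude intuition.
PARAMS = 4e9  # roughly 4 billion parameters (the "4B" in Klein 4B)

for label, bits in [("BF16 (full)", 16), ("Q8_0 / FP8", 8),
                    ("NVFP4 / Q4", 4), ("Q2", 2)]:
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{label:11s} ~{gigabytes:.1f} GB of weights")
```

That is roughly why the Q8 file sits comfortably on an 8GB card, while the Q2/Q3 files can squeeze onto much smaller GPUs or iGPUs.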

You need a Hugging Face account to download the necessary files (details are mentioned below). Make sure you install the GGUF addon for ComfyUI (and any other missing nodes) using ComfyUI Manager, and place the correct files in the correct folders. Also check out my other workflows for SD 1.5 + SDXL 1.0, WAN 2.1, WAN 2.2, MagicWAN Image v2, QWEN, HunyuanImage-2.1, HiDream, KREA, Chroma, AuraFlow, Z-Image Turbo and Flux. Feel free to toss some yellow Buzz on stuff you like.

How to use this -

#1. First select your desired Flux.2 Klein 4B GGUF model (or swap it with the full model)

#2. Next, select the image you want to edit

#3. Then enter your image editing instructions (be very precise & targeted, like the examples given)

#4. Select how many output images you want (change the number beside the "Run" button)

#5. Select the sampling method, CFG, steps and other settings (you may want to stay with the defaults)

#6. Finally, press the Run button to generate. That's it.

** If you want selective inpainting plus canvas expansion & outpainting without masking, simply bypass the "Empty Flux 2 Latent" node that passes the original input image's dimensions to "SamplerCustomAdvanced" and instead use the "Empty Flux 2 Latent" node inside the "Custom Image Dimension (Optional)" group.
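
For the custom-dimension path, the only thing you really have to work out is the expanded canvas size. Here is a tiny sketch of the arithmetic, assuming (as is typical for Flux-family latents, but check your own node) that width and height should be multiples of 32:

```python
# Sketch: expand a 1024x1024 input to a roughly 16:9 canvas for outpainting,
# rounding the new width to a multiple of 32 (a typical latent-size
# constraint for Flux-family models; verify against your own workflow).
def round_to_multiple(value: float, multiple: int = 32) -> int:
    return int(round(value / multiple)) * multiple

src_w, src_h = 1024, 1024        # original input image
target_aspect = 16 / 9           # desired outpainted aspect ratio

new_h = src_h                    # keep the height, grow the width sideways
new_w = round_to_multiple(new_h * target_aspect)

print(new_w, new_h)              # 1824 1024 -> enter these into the
                                 # "Empty Flux 2 Latent" node in the
                                 # "Custom Image Dimension (Optional)" group
```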

Required Files

================

Flux.2 Klein 4B Models -

-----------------------

### Download Link for Flux.2 Klein 4B GGUF Model used

------------------------------------------------------

https://huggingface.co/unsloth/FLUX.2-klein-4B-GGUF/resolve/main/flux-2-klein-4b-Q8_0.gguf

### Download Link for Flux.2 Klein 4B Text Encoder used

--------------------------------------------------------

https://huggingface.co/Comfy-Org/flux2-klein-4B/resolve/main/split_files/text_encoders/qwen_3_4b.safetensors

### Download Link for Flux.2 Klein 4B VAE used

-----------------------------------------------

https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/vae/flux2-vae.safetensors
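
If you would rather fetch everything from a script than click through the browser, here is a rough download sketch using the huggingface_hub package. The destination subfolders (models/unet, models/text_encoders, models/vae under your ComfyUI install) are my assumption of a typical layout; adjust them to wherever your ComfyUI setup expects GGUF unets, text encoders and VAEs, and run huggingface-cli login first if a repo asks you to be signed in.

```python
# Rough sketch: download the three required files with huggingface_hub and
# copy them into a typical ComfyUI model layout. The COMFY path and the
# destination subfolders are assumptions; change them to match your setup.
from pathlib import Path
import shutil

from huggingface_hub import hf_hub_download

COMFY = Path("ComfyUI")  # path to your ComfyUI install (adjust as needed)

downloads = [
    # (repo_id, file path inside the repo, destination subfolder)
    ("unsloth/FLUX.2-klein-4B-GGUF",
     "flux-2-klein-4b-Q8_0.gguf", "models/unet"),
    ("Comfy-Org/flux2-klein-4B",
     "split_files/text_encoders/qwen_3_4b.safetensors", "models/text_encoders"),
    ("Comfy-Org/flux2-dev",
     "split_files/vae/flux2-vae.safetensors", "models/vae"),
]

for repo_id, filename, subfolder in downloads:
    cached = hf_hub_download(repo_id=repo_id, filename=filename)
    dest = COMFY / subfolder / Path(filename).name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, dest)
    print("placed", dest)
```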
