Flux 2 Dev - Basic Workflow (Text-to-Image + Reference Support)

Here is a starter workflow for the newly released Flux 2 Dev, built on the native support just added by the ComfyUI team.

This workflow is designed to get you up and running quickly with the new architecture. It handles standard Text-to-Image generation but also includes a setup for Reference Image Conditioning (Multimodal input).

Workflow Features:

  • Standard Generation: calibrated for Flux 2 Dev.

  • Reference Inputs: I have included ReferenceLatent nodes connected to LoadImage inputs. This allows you to prompt using images (similar to IP-Adapter/Variations) combined with text; a minimal sketch of this wiring follows the list.

    • Note: As written in the workflow notes, simply Bypass (Ctrl+B) the Reference nodes if you only want to do pure Text-to-Image.
  • Resolution: Defaults set to 1024x1024.
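
For anyone curious what that reference path looks like under the hood, here is a rough sketch of just that section of the graph in ComfyUI's API (prompt) format, expressed as a Python dict. The node IDs, the example.png filename, and the upstream CLIPTextEncode/VAELoader nodes are illustrative placeholders, not the exact IDs used in the uploaded workflow.

```python
# Illustrative fragment of the reference-conditioning chain in ComfyUI API format.
# Node IDs and the image filename are placeholders, not the workflow's real IDs.
reference_chain = {
    # The reference image comes in as pixels...
    "10": {"class_type": "LoadImage",
           "inputs": {"image": "example.png"}},
    # ...gets encoded to a latent with the Flux 2 VAE...
    "11": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["10", 0],       # IMAGE from LoadImage
                      "vae": ["3", 0]}},         # VAE from a VAELoader elsewhere in the graph
    # ...and ReferenceLatent attaches that latent to the text conditioning,
    # so the sampler sees both the prompt and the reference image.
    "12": {"class_type": "ReferenceLatent",
           "inputs": {"conditioning": ["6", 0],  # CONDITIONING from CLIPTextEncode
                      "latent": ["11", 0]}},     # LATENT from VAEEncode
}
# Bypassing the ReferenceLatent node (Ctrl+B in the UI) passes the text
# conditioning straight through, which is what gives you pure Text-to-Image.
```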

Required Models:
To use this workflow without errors, ensure you have the following models (or their equivalents) in your ComfyUI models folder (download links are included in the workflow); a quick script to check the expected paths follows the list:

  1. UNET/Diffusion: flux2_dev_fp8mixed.safetensors

  2. CLIP: mistral_3_small_flux2_fp8.safetensors

  3. VAE: flux2-vae.safetensors
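
If ComfyUI complains about missing models, a quick check like the one below confirms the files landed where the default loaders look (models/diffusion_models, models/text_encoders, models/vae; the older unet and clip folder names also still work). The ComfyUI root path here is an assumption; point it at your own install.

```python
from pathlib import Path

# Adjust this to wherever your ComfyUI install lives.
COMFY_ROOT = Path("~/ComfyUI").expanduser()

# Default locations ComfyUI scans for each model type.
EXPECTED = {
    "models/diffusion_models/flux2_dev_fp8mixed.safetensors": "diffusion model",
    "models/text_encoders/mistral_3_small_flux2_fp8.safetensors": "text encoder",
    "models/vae/flux2-vae.safetensors": "VAE",
}

for rel_path, kind in EXPECTED.items():
    path = COMFY_ROOT / rel_path
    status = "OK" if path.is_file() else "MISSING"
    print(f"[{status:>7}] {kind}: {path}")
```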

How to use:

  1. Load the workflow.

  2. Ensure the diffusion model, text encoder, and VAE listed above are selected in their respective Loader nodes.

  3. (Optional) Upload images to the "Load Image" nodes to test the reference capabilities, or bypass the reference nodes (Ctrl+B) for standard text-to-image prompting.

  4. Queue Prompt! (Or queue it from a script; see the sketch below.)
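
If you would rather batch-run the workflow than click Queue Prompt, the sketch below posts it to a locally running ComfyUI server over its HTTP API. It assumes the default address 127.0.0.1:8188 and that you exported the graph with "Save (API Format)" as flux2_workflow_api.json (the regular save format will not work here).

```python
import json
import urllib.request

# Assumes a local ComfyUI server on the default port.
SERVER = "http://127.0.0.1:8188"

# The graph must be exported via "Save (API Format)" in ComfyUI.
with open("flux2_workflow_api.json", "r", encoding="utf-8") as f:
    prompt_graph = json.load(f)

# ComfyUI's /prompt endpoint expects the graph under the "prompt" key.
payload = json.dumps({"prompt": prompt_graph}).encode("utf-8")
req = urllib.request.Request(
    f"{SERVER}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # includes the prompt_id assigned to the job
```

The finished images show up in ComfyUI's output folder just as they would from the UI.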

Enjoy exploring Flux 2! Let me know if you run into any issues.
