Multi-Model Compare + Photorealism Engine + Video Animation
Hello everyone!
I'm sharing my all-in-one workflow, the "Illustrious" Workflow, designed for artists, model creators, and anyone who loves to experiment. This is a powerful, three-stage pipeline that lets you compare checkpoints, transform your art into photorealistic images, and then bring them to life with simple animation.
🔥 Key Features 🔥
Simultaneous Multi-Checkpoint Comparison:
Load up to 6 different models and generate an image from the same prompt and settings for all of them at once.
Perfect for A/B testing models or finding the best checkpoint for a specific style.
Centralized Controls:
- One set of prompts (Positive & Negative), one seed, one sampler, and one size setting drives all generations. This ensures a fair, reproducible comparison every time.
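A minimal sketch (not the workflow's actual code) of why sharing one seed makes the six-way comparison fair: every model starts from the same latent noise, so any differences in the outputs come from the checkpoints alone. The settings dict and noise helper below are purely illustrative.

```python
import random

# Illustrative shared settings, mirroring the workflow's central controls.
SHARED = {
    "seed": 123456789,
    "positive": "masterpiece, portrait, golden hour",
    "negative": "lowres, bad anatomy, watermark",
    "sampler": "euler_ancestral",
    "width": 832,
    "height": 1216,
}

def initial_noise(seed, size=4):
    """Stand-in for the latent noise a sampler would start from."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

# All six generations begin from identical starting noise.
noises = [initial_noise(SHARED["seed"]) for _ in range(6)]
```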
"Illustrious" Photorealism Engine:
Powered by the Qwen-VL Image-to-Image model, this stage takes your generated images and transforms them based on a text instruction.
The default prompt transforms any art style into a stunning, high-quality photograph while preserving composition and details. You can easily change the instruction to achieve other styles!
Image-to-Video Animation:
The final stage takes the transformed photorealistic image and animates it using a dedicated I2V (Image-to-Video) model.
Create subtle, atmospheric motion, perfect for bringing portraits or landscapes to life.
Full LoRA Support:
- A main Lora Stacker allows you to apply the same LoRAs across all 6 models simultaneously for consistent results.
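Conceptually, the shared stack works like the hypothetical sketch below: the same (file, weight) pairs are applied to every checkpoint, so all six outputs reflect identical LoRA influence. The LoRA filenames are examples, and the string-building helper is just a stand-in for the real loader nodes.

```python
# Example LoRA stack shared by all six models (filenames are illustrative).
LORA_STACK = [
    ("add_detail.safetensors", 0.6),
    ("film_grain.safetensors", 0.4),
]

def apply_stack(checkpoint, stack):
    """Describe the patched model (a stand-in for chained LoRA loaders)."""
    patched = checkpoint
    for lora, weight in stack:
        patched = f"{patched}+{lora}@{weight}"
    return patched

# One stack, six checkpoints: every model gets the same patches.
patched_models = [apply_stack(f"model_{i}", LORA_STACK) for i in range(1, 7)]
```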
⚙️ How to Use ⚙️
Install Custom Nodes: Make sure you have the following custom nodes installed via the ComfyUI Manager:
WAS Node Suite
Efficiency Nodes for ComfyUI
rgthree's ComfyUI Nodes
ComfyUI_Mira
ComfyUI-Lora-Manager
ComfyUI Impact Pack
ComfyUI Video Helper Suite
ComfyUI-KJNodes
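If you want to double-check the install, a small helper like this can list which packs are missing. The folder names below are assumptions: they depend on how each pack was cloned or installed, so adjust the REQUIRED list to match your own custom_nodes directory.

```python
from pathlib import Path

# Assumed folder names for the required node packs -- verify against
# your own ComfyUI/custom_nodes directory.
REQUIRED = [
    "was-node-suite-comfyui",
    "efficiency-nodes-comfyui",
    "rgthree-comfy",
    "ComfyUI_Mira",
    "ComfyUI-Lora-Manager",
    "ComfyUI-Impact-Pack",
    "ComfyUI-VideoHelperSuite",
    "ComfyUI-KJNodes",
]

def missing_node_packs(comfy_dir):
    """Return the required packs not found under custom_nodes/ (case-insensitive)."""
    custom = Path(comfy_dir) / "custom_nodes"
    if not custom.is_dir():
        return list(REQUIRED)
    installed = {p.name.lower() for p in custom.iterdir() if p.is_dir()}
    return [name for name in REQUIRED if name.lower() not in installed]
```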
Download Required Models: This workflow uses specific models for the transformation and video stages. Make sure to download:
Qwen-VL Models: Search for "Qwen-Image-Edit" on Hugging Face or Civitai for the required UNET, VAE, and CLIP models.
I2V Model: The workflow is configured for wan2.2-i2v-rapid-aio-v10-nsfw.safetensors, but other I2V models should work.
CLIP Vision Model: clip_vision_vit_h.safetensors.
Load Your Checkpoints: Go to the 6 groups labeled "Model 1" to "Model 6" and load your desired .safetensors checkpoints in each CheckpointLoaderSimple node.
Set Your Prompts: Edit the main "Positive Prompt" and "Negative Prompt" text boxes. These will be used for all 6 models.
Configure & Run: Set your desired image size in EmptyLatentImage, adjust steps/CFG, and hit "Queue Prompt"!
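If you'd rather run batches headlessly, the same workflow can be queued through ComfyUI's HTTP API instead of the "Queue Prompt" button. This sketch assumes the default server address (127.0.0.1:8188) and a workflow exported with "Save (API Format)"; the client_id string is arbitrary.

```python
import json
import urllib.request

def build_payload(workflow, client_id="illustrious-batch"):
    """Wrap an API-format workflow dict for POST /prompt."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def queue_prompt(workflow, host="127.0.0.1:8188"):
    """Send the workflow to a running ComfyUI server and return its reply."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

You could load the exported JSON, tweak the seed or prompt fields per run, and call queue_prompt in a loop for unattended comparisons.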
Workflow Breakdown:
Stage 1 (Generation): Creates 6 images from your prompt using 6 different models.
Stage 2 (Transformation): The generated images are fed into the Qwen-VL engine and transformed into photorealistic versions. The result is saved in the output/Real folder.
Stage 3 (Animation): The photorealistic image is animated into a short MP4 clip and saved in the output/Video folder.
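The three-stage hand-off can be sketched as a simple chain (the stage functions here are placeholders, not real node calls): Stage 1 fans out across the checkpoints, then Stages 2 and 3 run once per generated image.

```python
def run_pipeline(generate, photorealize, animate, prompt, models):
    """Illustrative data flow: one prompt in, one video per checkpoint out."""
    videos = []
    for model in models:
        image = generate(model, prompt)   # Stage 1: txt2img per checkpoint
        real = photorealize(image)        # Stage 2: image edit -> output/Real
        videos.append(animate(real))      # Stage 3: I2V -> output/Video
    return videos

# Toy stand-ins show the hand-off end to end.
clips = run_pipeline(
    generate=lambda m, p: f"{m}:{p}",
    photorealize=lambda img: f"real({img})",
    animate=lambda img: f"video({img})",
    prompt="portrait",
    models=["model_1", "model_2"],
)
```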
This workflow has been a game-changer for my creative process, allowing for rapid experimentation and high-quality results. I hope you find it as useful as I do!
Happy generating!