Production Dual-Phase Refiner - LoRA Manager + Auto Metadata + Memory Optimized Workflow
Overview
The Production Dual-Phase Refiner is a professional-grade ComfyUI workflow designed to produce exceptionally detailed, high-quality images through a two-stage refinement process with integrated LoRA Manager support and automatic metadata embedding. It gives you complete control over your image generation pipeline while maintaining efficient memory management for large batch processing. Visual LoRA selection, automatic A1111-compatible metadata embedding, and strategic memory optimization ensure streamlined operation, perfect reproducibility, and crash-free batch processing at scale.
Model Compatibility
Skill Level: This is an ADVANCED workflow designed for users who are comfortable with ComfyUI's node-based interface and understand concepts like samplers, schedulers, denoise strength, and model loading. If you're new to ComfyUI, consider starting with simpler single-phase workflows before tackling this production system.
Optimized For: This workflow has been extensively tested and performs best with SDXL, Pony, Illustrious, and NoobAI base models. The creator has achieved 98% success rates across these model families with the current settings and model configurations.
Important - Switching Base Models: The workflow comes pre-configured with KSampler settings optimized for the default base model. If you swap to a different base model (especially between model families like SDXL→Pony or Pony→Illustrious), you will need to adjust the KSampler parameters (steps, CFG scale, sampler type, scheduler) in both Phase 1 and Phase 2 to match that model's optimal settings. Different models have different sweet spots - what works perfectly for one checkpoint may produce poor results with another.
SD 1.5 Models: This workflow has not been tested with SD 1.5 base models, though some SD 1.5 LoRAs may work when used with SDXL-based checkpoints.
Flux Models: This workflow is not designed for Flux-based models. A separate dedicated Flux workflow is currently in development for those workloads.
Recommendation: For best results, use SDXL-family checkpoints (including Pony, Illustrious, and NoobAI derivatives) with this workflow. Be prepared to tune KSampler settings when experimenting with different base models.
Key Features
Automatic Metadata Embedding System
One of the workflow's most powerful professional features is complete automatic metadata capture and embedding:
A1111-Compatible Format - Every image generated includes full metadata in Auto1111-compatible format, ensuring seamless compatibility with CivitAI and other platforms. No manual data entry required - ever.
Complete Provenance Tracking - The Debug Metadata (LoraManager) nodes automatically capture:
All generation parameters (steps, CFG, sampler, scheduler)
Complete LoRA information with weights
Model/checkpoint details
Prompt and negative prompt
Seed values for perfect reproducibility
Image dimensions and technical settings
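As a rough illustration of what the embedded text looks like, the A1111 "parameters" format is a prompt, an optional "Negative prompt:" line, and a final line of comma-separated key/value settings. This is a minimal stand-alone sketch (not part of the workflow itself) of how that string can be parsed back into structured data:

```python
import re

def parse_a1111_parameters(text: str) -> dict:
    """Split an A1111-style 'parameters' string into prompt,
    negative prompt, and key/value settings."""
    lines = text.strip().split("\n")
    # The last line holds the comma-separated settings pairs.
    settings_line = lines[-1]
    neg_idx = next((i for i, l in enumerate(lines)
                    if l.startswith("Negative prompt:")), None)
    prompt = "\n".join(lines[:neg_idx if neg_idx is not None else -1])
    negative = ""
    if neg_idx is not None:
        negative = "\n".join(lines[neg_idx:-1])[len("Negative prompt:"):].strip()
    settings = dict(re.findall(r"(\w[\w ]*): ([^,]+)", settings_line))
    return {"prompt": prompt, "negative": negative, "settings": settings}
```

Tools like CivitAI read exactly this kind of string out of the saved PNG, which is why the embedded metadata "just works" on upload.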
Professional Workflow Benefits:
Zero documentation overhead - Every successful generation is self-documenting
Perfect reproducibility - Always know exactly what created each image
CivitAI optimization - Proper metadata improves discoverability and search indexing
Client/collaboration ready - Share exact settings without manual transcription
Portfolio quality - Demonstrates technical professionalism with complete provenance
Business Intelligence - When processing 200-400 images in batches, automatic metadata means you can identify your best performers and know exactly what settings created them, without detective work or guesswork.
Integrated LoRA Manager System
Full LoRA Manager integration transforms how you work with LoRAs:
Visual LoRA Selection - Instead of scrolling through endless lists of cryptic filenames, LoRA Manager displays thumbnail previews of each LoRA's effect. This visual approach is dramatically faster and more intuitive, especially for users who remember images better than text (a common trait in AuDHD/ADHD individuals).
Seamless Metadata Integration - LoRA Manager works hand-in-hand with the automatic metadata system, ensuring every LoRA you select is properly documented in your final images with correct weights and identifiers.
Per-Phase LoRA Management - Each phase has independent LoRA Manager support, allowing you to visually select and configure different LoRA sets for each refinement stage.
Advanced Memory Management
One of the workflow's critical production features is its intelligent resource management system, designed to handle large batch processing without crashes:
Model Unloaders - These specialized nodes automatically remove AI models from your graphics card's memory (VRAM) after they're no longer needed. Think of them as cleanup crew members who clear the stage between acts of a performance.
RAM Cleaners - These nodes clear your computer's main system memory, preventing the buildup of temporary data that can slow down or crash your system during long batch runs.
Strategic Placement - Memory management nodes are positioned at critical transition points in the workflow, ensuring optimal performance throughout the entire generation process.
Production-Scale Capability - Process 200-400+ images in single batches without VRAM exhaustion or system crashes. Run overnight generations with confidence.
Dual-Phase Architecture
This workflow separates image generation into two distinct phases, each capable of operating independently or in sequence:
Flexibility in Model Selection:
Use the same base model in both phases for ultra-refined, hyper-detailed results that push a single model to its limits
Use different base models in each phase to combine the strengths of multiple checkpoints, creating unique hybrid effects or achieving specialized refinements
Each phase can be completely disabled when not needed, giving you maximum workflow flexibility
LoRA Independence:
Apply the same LoRAs across both phases to intensify specific effects and details
Use completely different LoRA sets in each phase to layer multiple artistic styles or technical enhancements
Mix and match LoRAs strategically between phases for creative experimentation
Phase Control System
Each major section of the workflow can be enabled or disabled independently:
Phase 1 - Initial image generation
Phase 2 - Secondary refinement with optional model swap
FaceDetailer Groups - Specialized facial enhancement passes
Upscaling Sections - Resolution enhancement stages
This modular design lets you customize the workflow for different projects, from quick previews (Phase 1 only) to maximum quality renders (all phases enabled).
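Conceptually, the phase control system behaves like a set of independent switches: only the enabled sections run, in order. The sketch below is a hypothetical illustration of that idea (the stage names are made up for the example, not the workflow's actual node names):

```python
# Hypothetical phase toggles: each section can be switched off
# independently, and only the enabled stages run, in order.
PHASES = {
    "phase1": True,           # initial generation
    "phase1_facedetail": True,
    "phase2": False,          # secondary refinement off for a quick preview
    "phase2_facedetail": False,
    "upscale": False,
}

def enabled_stages(phases: dict) -> list:
    """Return the ordered list of stage names that are switched on."""
    return [name for name, on in phases.items() if on]
```

With the flags above, only Phase 1 and its FaceDetailer pass would execute, which is the "quick preview" configuration described above.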
Workflow Components
Model Loading Nodes
CheckpointLoader - This is your starting point, loading the main AI model (checkpoint) that will generate your images. Think of this as loading the "brain" that understands how to create art.
LoraLoader - These nodes load additional specialized training files (LoRAs) that modify your base model's behavior. LoRAs are like giving your AI specific skills or artistic styles. You can stack multiple LoRA loaders to combine effects. When integrated with LoRA Manager, these nodes include visual preview capabilities for easy selection.
LoRA Manager Integration - Provides visual thumbnail previews of LoRA effects, making selection intuitive and fast. Instead of remembering cryptic filenames like "detail_enhancer_v2_final_ACTUALLY_FINAL.safetensors," you simply recognize the visual style you want. This is especially valuable for neurodivergent users who excel with visual memory.
VAELoader - Loads the VAE (Variational AutoEncoder), which is responsible for converting between the AI's internal representation and actual pixels you can see. A high-quality VAE ensures your final images have proper colors and sharpness.
Generation Control Nodes
KSampler - The heart of the image generation process. This node controls how the AI "dreams up" your image through a step-by-step refinement process. It takes your text description and gradually transforms random noise into a coherent image.
Critical Note: The workflow includes KSampler nodes in both Phase 1 and Phase 2. These are pre-configured for optimal performance with specific base models. When switching between different base models (especially across model families), you must adjust the KSampler settings - including steps, CFG scale, sampler type (like dpm_2, euler_a, dpmpp_3m_sde), and scheduler (like karras, simple, normal) - to match your chosen model's requirements. Each model family has its own optimal parameters.
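One way to keep per-family settings straight is a simple lookup table. The numbers below are illustrative starting points only - they are not the workflow's shipped settings, and each checkpoint's own documentation should take precedence:

```python
# Illustrative baseline presets per model family -- NOT the workflow's
# actual shipped values; always check your checkpoint's recommendations.
KSAMPLER_PRESETS = {
    "sdxl":        {"steps": 30, "cfg": 7.0, "sampler": "dpmpp_2m", "scheduler": "karras"},
    "pony":        {"steps": 25, "cfg": 7.0, "sampler": "euler_a",  "scheduler": "normal"},
    "illustrious": {"steps": 28, "cfg": 5.5, "sampler": "euler_a",  "scheduler": "normal"},
    "noobai":      {"steps": 28, "cfg": 5.0, "sampler": "euler_a",  "scheduler": "normal"},
}

def preset_for(model_family: str) -> dict:
    """Look up a baseline KSampler preset, falling back to SDXL defaults."""
    return KSAMPLER_PRESETS.get(model_family.lower(), KSAMPLER_PRESETS["sdxl"])
```

The point is the pattern, not the numbers: when you swap checkpoints, swap the whole parameter set, not just one value.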
EmptyLatentImage - Creates the initial blank canvas that your image will be generated on. The size you set here determines your base image dimensions.
CLIPTextEncode - Converts your text prompts into a mathematical language the AI can understand. You'll have separate nodes for positive prompts (what you want) and negative prompts (what you want to avoid).
Seed Node (easy seed) - Controls the random number generator that determines the uniqueness of each generated image. Using the same seed with identical settings produces identical results (perfect for reproducibility), while randomizing the seed creates variations. Essential for batch processing where you want diverse outputs or for A/B testing specific parameter changes while keeping other variables constant.
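The reproducibility property described above can be demonstrated with any seeded pseudo-random generator - the same seed always yields the same "noise", so the same generation:

```python
import random

def noise_sample(seed: int, n: int = 4) -> list:
    """Draw n pseudo-random values from an explicitly seeded generator,
    mimicking how a fixed seed makes a sampling run repeatable."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed -> identical starting noise -> identical image;
# different seed -> different starting point -> a variation.
```

This is why locking the seed while changing one parameter gives a clean A/B comparison.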
Wildcard Nodes - Enable dynamic prompt variation by randomly selecting from predefined lists of options. Instead of manually changing prompts between generations, wildcards let you define categories (like different poses, lighting scenarios, or style variations) and the workflow automatically randomizes selections for each image. This is essential for generating diverse content in large batches without manual intervention - perfect for maintaining variety across hundreds of images while keeping core elements consistent.
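The wildcard mechanic can be sketched in a few lines: tokens in the prompt are replaced with random picks from named lists. Real wildcard nodes typically read these lists from text files (one option per line); the lists and the `__name__` token syntax below are illustrative:

```python
import random
import re

# Hypothetical wildcard lists; real wildcard nodes usually load these
# from text files, one option per line.
WILDCARDS = {
    "pose": ["standing", "sitting", "walking"],
    "lighting": ["golden hour", "studio lighting", "moonlight"],
}

def expand_wildcards(prompt: str, rng: random.Random) -> str:
    """Replace each __name__ token with a random entry from its list."""
    def pick(match):
        return rng.choice(WILDCARDS[match.group(1)])
    return re.sub(r"__(\w+)__", pick, prompt)
```

Run over a batch of hundreds of images, each generation draws fresh picks, keeping the core prompt constant while the variable elements rotate.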
Refinement Nodes
FaceDetailer - A specialized tool that finds faces in your image and regenerates them with enhanced detail. This workflow uses FaceDetailer twice (once in each phase when enabled) for maximum facial quality. Each pass can use different detection settings and models.
Configured Settings: The workflow uses carefully tuned FaceDetailer parameters including 0.35 denoise strength, 20 steps, DPM_2 sampler with Karras scheduler, and 0.93 detection threshold. These settings have proven highly reliable across 98% of SDXL-family base models tested.
ImageUpscaleWithModel - Increases your image resolution using AI upscaling models. Unlike simple stretching, these models add intelligent detail as they enlarge your images.
Configured Model: The workflow uses ESRGAN_4x.pth for upscaling, which provides excellent detail enhancement while maintaining image coherence. This upscaler works consistently well across SDXL, Pony, Illustrious, and NoobAI model families.
Detection and Segmentation Nodes
UltralyticsDetectorProvider - Provides the AI models that can identify specific objects in images (like faces, hands, or bodies). This is what helps FaceDetailer know where faces are located.
Configured Model: Uses segm/skin_yolov8n-seg_800.pt for segmentation detection. This model excels at identifying skin regions and facial boundaries, which is critical for accurate FaceDetailer operations.
SAMLoader - Loads the Segment Anything Model (SAM), which can precisely outline objects in images. This helps isolate areas for detailed refinement.
Configured Model: Uses sam_vit_b_01ec64.pth with AUTO mode. This balanced SAM model provides excellent segmentation accuracy without excessive VRAM usage, making it ideal for batch processing workflows.
SegmDetectorSEGS - Uses segmentation models to identify and separate different regions of your image for targeted enhancement. Works in conjunction with the SAM and Ultralytics models to create precise masks for refinement.
Image Processing Nodes
VAEDecode - Converts the AI's internal image representation back into regular pixels you can view and save.
VAEEncode - Does the opposite - converts a regular image into the AI's internal format for further processing.
ImageScale - Resizes images to specific dimensions. Useful for preparing images between workflow phases.
Output and Organization Nodes
SaveImage - Saves your generated images to disk. This workflow has multiple save points, allowing you to capture results at different quality stages.
Subdirectory Node (PrimitiveString) - Allows you to organize saved images into custom subdirectories/folders. Instead of all images dumping into one folder, you can route different phases or image types to organized subfolders (like "Phase1_Base", "Phase2_Refined", "Final_Upscaled"). Critical for managing large batch outputs and keeping your production organized - especially valuable when processing hundreds of images across multiple projects.
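The routing idea is simple path composition: a per-stage subdirectory string is joined onto the output root before saving. A minimal sketch of that behavior (the function name and folder names are just for illustration):

```python
from pathlib import Path

def save_path(output_root: str, subdirectory: str, filename: str) -> Path:
    """Build (and create) a per-stage subfolder, mirroring how a
    subdirectory string routes SaveImage outputs into organized folders."""
    folder = Path(output_root) / subdirectory
    folder.mkdir(parents=True, exist_ok=True)
    return folder / filename
```

With subdirectory values like "Phase1_Base" and "Phase2_Refined", each save point in the workflow lands its images in its own folder automatically.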
Debug Metadata (LoraManager) - Captures and stores all the generation settings, LoRAs used, and other metadata within your images in A1111-compatible format. This is crucial for:
Tracking what settings produced specific results
Ensuring compatibility with image sharing platforms like CivitAI
Maintaining complete reproducibility of your best generations
Automatic LoRA information embedding without manual data entry
Professional workflow documentation
This node is what enables the seamless integration between LoRA Manager's visual selection system and the final image metadata, ensuring you never lose track of successful configurations.
Routing and Organization Nodes
ReroutePrimitive - Acts like a junction box, allowing you to send data from one node to multiple destinations without cluttering your workflow with crossing wires. These keep the workflow organized and readable.
Bypass Switches - Special nodes that let you toggle entire sections of the workflow on or off without disconnecting anything.
Memory Management Nodes
SoftModelUnloader - Intelligently removes AI models from VRAM after they've been used, freeing up graphics card memory for the next phase of processing.
easy clearCacheAll - Performs comprehensive memory cleanup, clearing both VRAM and system RAM caches to prevent memory buildup during batch processing.
Workflow Stages
Phase 1: Base Image Generation
The workflow begins with your chosen checkpoint model and LoRAs creating the initial image. This phase includes:
Text prompt processing
Initial image generation at your specified resolution
First FaceDetailer pass (optional)
Initial image save point
Transition Bridge
Between phases, the workflow includes:
Model unloading nodes to clear VRAM
Image format conversion if needed
Optional checkpoint swap preparation
Phase 2: Super Refinement
The second phase takes your base image and pushes it to the next level:
Loading of Phase 2 model (can be same or different)
Loading of Phase 2 LoRAs (can be same or different)
Image-to-image refinement at higher detail
Second FaceDetailer pass (optional)
Final upscaling pass
Final image saves
Cleanup Stage
After all generation is complete:
Comprehensive memory cleanup
Final model unloading
Cache clearing to prepare for next batch
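In spirit, the cleanup stage does what this small sketch does: run garbage collection, then release cached GPU memory when a CUDA-capable torch is present. This is an approximation of the behavior, not the actual node implementation, and it degrades gracefully when torch is not installed:

```python
import gc

# torch is optional in this sketch; on a real ComfyUI host it would be
# present, and the CUDA cache would be released as well.
try:
    import torch
except ImportError:
    torch = None

def clear_caches() -> None:
    """Approximate a between-batch cleanup stage: run the garbage
    collector, then release cached GPU memory if CUDA is available."""
    gc.collect()
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()
```

Calling this between phases is what lets each stage start with maximum available resources instead of inheriting the previous stage's leftovers.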
Use Cases
Visual LoRA Experimentation Mode
Leverage LoRA Manager's visual previews to rapidly test different LoRA combinations. Perfect for users who think visually or have AuDHD/ADHD - recognize effects instantly without parsing filenames. The automatic metadata capture means every successful experiment is perfectly documented for future use.
Maximum Detail Mode
Enable all phases with the same model and LoRAs in both phases. Perfect for extracting every ounce of detail from a single checkpoint.
Hybrid Enhancement Mode
Use different models in each phase - for example, generate the base with a general-purpose model, then refine with a photorealism specialist.
Style Blending Mode
Apply artistic style LoRAs in Phase 1, then switch to technical enhancement LoRAs in Phase 2 for stylized but technically perfect results.
Batch Production Mode
Use strategic phase disabling to find your optimal quality-to-speed ratio for producing large volumes of images.
Memory Efficiency
The strategic placement of memory management nodes throughout this workflow allows you to:
Process large batches (50-200+ images) without running out of VRAM
Prevent system crashes from RAM exhaustion
Maintain consistent performance across extended generation sessions
Run high-resolution workflows on mid-range hardware
The workflow achieves this by clearing memory at every major transition point, ensuring each phase starts with maximum available resources.
Technical Advantages
Automatic Metadata System - Complete A1111-compatible metadata embedding eliminates manual documentation overhead. Every image is self-documenting with full provenance tracking, ensuring perfect reproducibility and professional CivitAI compatibility. Never lose track of successful configurations or waste time on manual parameter transcription.
Visual LoRA Management - Integrated LoRA Manager support eliminates the cognitive load of filename-based LoRA selection. Visual previews make experimentation faster and more intuitive, especially valuable for neurodivergent users who excel with visual memory over text-based recall. Seamless integration with metadata system ensures all LoRAs are properly documented.
Production-Grade Memory Optimization - Strategic model unloaders and RAM cleaners enable reliable batch processing of 200-400+ images without crashes. Run overnight generations with confidence, knowing your workflow won't exhaust VRAM or system memory mid-batch.
Modularity - Every major section can be independently enabled or disabled without breaking the workflow logic.
Scalability - Whether you're generating a single image or batches of hundreds, the memory management ensures consistent performance.
Flexibility - Swap models, LoRAs, and settings between phases without rebuilding the entire workflow.
Quality Control - Multiple save points let you compare results at different refinement stages to dial in your perfect settings.
Metadata Preservation - Full generation information is embedded in your saved images for reproducibility and platform compatibility.
Best Practices
Understand Your Base Model - Before diving in, familiarize yourself with your chosen base model's optimal KSampler settings. Check the model's documentation or community recommendations for steps, CFG, sampler, and scheduler values. The workflow's default settings may not be optimal for every model.
Leverage Visual LoRA Selection - Use LoRA Manager's thumbnail previews to quickly identify effects rather than memorizing filenames. This dramatically speeds up experimentation and reduces cognitive load.
Start Simple - Begin with Phase 1 only to test your prompt and settings before enabling full refinement. This lets you verify your base generation quality before investing time in dual-phase processing.
Monitor Memory - Watch your VRAM usage to determine if you need to disable optional stages for your hardware.
Experiment with Models - Try the same checkpoint in both phases first, then experiment with different combinations to discover unique effects. Remember to adjust KSampler settings when switching models.
Layer Your LoRAs Strategically - Use broad, general-purpose LoRAs in Phase 1 and specialized detail-enhancement LoRAs in Phase 2. The visual preview system makes it easy to build effective combinations.
Trust the Metadata System - The automatic metadata embedding means you never need to manually document successful settings. Every image is self-documenting.
Batch Processing - The memory management system shines during batch operations - let it run overnight for maximum productivity.
Conclusion
The Production Dual-Phase Refiner represents a professional-grade approach to AI image generation, built on three core pillars: visual LoRA management, automatic metadata embedding, and production-scale memory optimization. Whether you're creating single masterpiece images or running production batches, this workflow delivers exceptional results with professional reliability.
LoRA Manager integration transforms the LoRA selection experience from text-based filename hunting into fast, visual recognition - especially valuable for neurodivergent users who excel with visual memory. Automatic A1111-compatible metadata embedding ensures every image is self-documenting with complete provenance tracking, eliminating manual documentation overhead and providing perfect reproducibility. Strategic memory management enables crash-free batch processing at scale, with proven capability for 200-400+ image batches in single overnight runs.
Combined with flexible dual-phase architecture (supporting same or different models/LoRAs between phases) and complete modular control, this workflow adapts to virtually any creative or production scenario. From hobbyist experimentation to professional content creation with platforms like CivitAI, the Production Dual-Phase Refiner delivers quality, reliability, and an intuitive user experience that scales with your ambitions.
Battle-Tested Configuration: The workflow comes pre-configured with carefully tuned settings (ESRGAN_4x upscaling, optimized FaceDetailer parameters, balanced SAM/YOLO detection models) that have achieved 98% success rates across SDXL, Pony, Illustrious, and NoobAI model families. These aren't theoretical settings - they're production-proven configurations refined through extensive real-world batch processing.