ANY IMAGE REPLACER
Model description
I can't make the fix right now, but I found a mistake. Under the face detailer node for the single person (not the two-person one), you'll see the model connector. It is pulling the wrong model and should be pulling the LORA-loaded model. Change the model input "get node" to "PERSON 1_LORA MODEL_0" for better results. Do not change the two-person face detailer, since it uses two separate LORAs.
This workflow lets you upload any image and replace the character with your LORA character, preserving most of the original image's style, colors, features, and poses. By "any image" I mean any image that shows one or two people. Three or more is probably possible, but it would be tedious to build all of the mask detections and constantly sort through which ones are being swapped.
FULL DISCLAIMER: This works most of the time and produces about 90% quality with the right settings.
This isn't perfect. Depending on what image you're uploading, detection isn't always correct, especially where there are partial people in the picture, the pose is inherently complex, or there is ...ahem... "exposed anatomy" that z-image isn't great at yet.
ARTIFACTS ARE COMMON - Play around with character variability to see if you can find the right combination. This stems from the use of an existing image.
As good as z-image turbo is, you are forcing it to work over a picture it did not create, so it fills in the blanks. You might see:
- Moisture or water beads
- Warped, wrinkly, or deformed skin textures
- Malformed body parts
- Cutouts or missing blank spaces
- Random items, straps, or objects
Just tweak the CHARACTER VARIABILITY to see what changes work.
I do not recommend uploading images that are over 2400 x 2400 or you will run out of memory. Throw in an image resize node before processing a large image.
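If you would rather shrink oversized images before they ever hit the workflow, a small Pillow script like the sketch below can serve as a pre-processing step. This is just an illustration outside the workflow itself, not part of it: the 2400-pixel cap mirrors the guideline above, and the file names are placeholders.

```python
from PIL import Image

MAX_SIDE = 2400  # matches the 2400 x 2400 guideline above


def downscale_if_needed(src_path: str, dst_path: str, max_side: int = MAX_SIDE) -> None:
    """Proportionally shrink an image so its longest side is at most max_side."""
    img = Image.open(src_path)
    if max(img.size) > max_side:
        # thumbnail() resizes in place and preserves aspect ratio
        img.thumbnail((max_side, max_side), Image.LANCZOS)
    img.save(dst_path)


# Example usage with placeholder file names
downscale_if_needed("input.png", "input_resized.png")
```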
I have put key notes in the workflow that explain how to tweak it, including adjustments to make if the output or masking isn't correct.
Additionally, I'm using the Qwen3-VL nodes. These models take a while to load the first time you use them. They are set to offload to your CPU when they're done but can be easily recalled in the same session. There are VRAM-clearing nodes throughout the workflow, so I don't recommend toggling the model to stay on your GPU; it will just need to be reloaded the next time you run. While slower, CPU offloading strikes a balance by freeing up VRAM during the run.
There are a lot of, erm, spicy websites with images on them. Replicating someone's likeness already has its challenges, and reusing watermarked images for distribution adds to them. This is a personal-use workflow.
Every now and then, in all of my workflows, z-image produces black images. If that happens to you, just restart your session; that's what works for me.
Per usual, I have no issue with anyone reusing or reproducing elements from this workflow, but if you see inefficiencies or find better settings, let me know so I can update the post.
Enjoy!
