Style-96

Model description

Final-2 (update, for the original story, see below)

You know it is never the last version when you write "final" in its name :D

I redid the training and made an alternate merge thanks to a great remark by @Neural_Lens.

First, I lowered the LR, trained for more epochs, and switched back to the classical cosine scheduler with Prodigy as the optimizer. I also reduced the dimension (16) but chose a lower alpha (4) to get more impact on the value changes.

{
  "engine": "kohya",
  "unetLR": 0.0001,
  "clipSkip": 2,
  "loraType": "lora",
  "keepTokens": 0,
  "networkDim": 16,
  "numRepeats": 2,
  "resolution": 1024,
  "lrScheduler": "cosine",
  "minSnrGamma": 0,
  "noiseOffset": 0.03,
  "targetSteps": 2400,
  "enableBucket": true,
  "networkAlpha": 4,
  "optimizerType": "Prodigy",
  "textEncoderLR": 0,
  "maxTrainEpochs": 20,
  "shuffleCaption": true,
  "trainBatchSize": 4,
  "flipAugmentation": false,
  "lrSchedulerNumCycles": 1
}
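For context, my understanding of the usual kohya-style LoRA scaling (an assumption about the trainer, not something shown in the config) is that the applied delta is multiplied by alpha / dim, so dim 16 with alpha 4 scales it by 0.25 and the raw trained weights have to move further to compensate:

# Sketch of the assumed kohya-style scaling: W' = W + (alpha / dim) * (B @ A)
dim, alpha = 16, 4
print(alpha / dim)  # 0.25 -- the smaller the ratio, the harder the optimizer
                    # has to push the raw LoRA weights for the same effect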

On top of that, I used Neural Lens Core as the training base, which feels like the perfect Illustrious model to train on, much better than the base Illustrious V0.1 or a random WAI iteration.

After some tests, I selected several epochs and did a new merge to see if TIES and DARE could work:

from safetensors.torch import save_file
import sd_mecha as sdm

sdm.set_log_level()

# The earliest selected epoch acts as the base; the later ones are merged into it
base = sdm.model("test3-000010.safetensors")

models = [
  sdm.model("test3-000012.safetensors"),
  sdm.model("test3-000018.safetensors"),
  sdm.model("test3-000019.safetensors"),
]

# TIES merge with DARE-style random dropping of deltas (fixed seed for reproducibility)
recipe_dare = sdm.ties_with_dare(base, *models, probability=0.6, seed=42, alpha=0.5)

test = sdm.merge(recipe_dare, output_device="cpu")

# Drop the text encoder LoRA keys (see the note below about Prodigy and the TE LR)
for k in [k for k in test.keys() if "lora_te" in k]:
  del test[k]

save_file(test, "test3-dare.safetensors")
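For reference, the core DARE trick is to randomly drop most of each fine-tuned delta and rescale the survivors so their expected magnitude stays the same, before the TIES step combines them. A rough, illustrative sketch of that dropping step (my own simplification, not sd_mecha's actual implementation):

import torch

def dare_drop(delta: torch.Tensor, probability: float = 0.6, seed: int = 42) -> torch.Tensor:
    """Randomly zero out a fraction of a delta and rescale the rest (DARE-style)."""
    g = torch.Generator().manual_seed(seed)
    keep = (torch.rand(delta.shape, generator=g) >= probability).to(delta.dtype)
    return delta * keep / (1.0 - probability)  # rescaling preserves the expected value of the delta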

At this point, when I did the merge, I saw that using Prodigy did not honour my text encoder LR of 0, which is why I am removing the TE layers.
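If you want to check that for yourself, a quick sanity check (a sketch that reuses the `test` state dict from the script above, assuming its values are CPU torch tensors) is to look at the total norm of the `lora_te` keys before deleting them:

te_norm = sum(v.float().norm().item() for k, v in test.items() if "lora_te" in k)
print(f"total norm of lora_te tensors: {te_norm:.2f}")  # clearly > 0 means the TE moved despite textEncoderLR = 0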

In any case, it works differently from the first version, is also nice, and if you hesitate between the two, you can just slap both together :D

First and Last

So, a quick test. The goal was to replicate a specific style, especially for eyes and lips. After finding a combination of LoRAs, I generated a bunch of random 1girl pictures (60 of them), did a pass of autotagging, and started a quick training to spend some blue buzz.

The training parameters:

{
  "engine": "kohya",
  "unetLR": 0.0005,
  "clipSkip": 2,
  "loraType": "lora",
  "keepTokens": 0,
  "networkDim": 32,
  "numRepeats": 2,
  "resolution": 1024,
  "lrScheduler": "cosine_with_restarts",
  "minSnrGamma": 0,
  "noiseOffset": 0.03,
  "targetSteps": 1440,
  "enableBucket": true,
  "networkAlpha": 32,
  "optimizerType": "Adafactor",
  "textEncoderLR": 0,
  "maxTrainEpochs": 12,
  "shuffleCaption": false,
  "trainBatchSize": 1,
  "flipAugmentation": false,
  "lrSchedulerNumCycles": 3
}

Nothing fancy (PS: I know those aren't the best parameters, I was also doing some testing :D)
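As a side note, the targetSteps value is consistent with the dataset described above (assuming all 60 generated pictures ended up in the training set, and that steps are counted as images × repeats × epochs / batch size):

# Sketch: step count = images x repeats x epochs / batch size (batch is 1 here)
images, repeats, epochs, batch = 60, 2, 12, 1
print(images * repeats * epochs // batch)  # 1440, matching "targetSteps" in the config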

I selected both epoch 7 and epoch 9, which looked nice, and the reason I am posting this is that I often merge LoRA training epochs.

I know that's not something that is often done; I usually do it manually, but for once I tested a technique that can be reproduced:

>>> import sd_mecha as sdm
>>> sdm.set_log_level()
>>> a = sdm.model("test-000007.safetensors")
>>> b = sdm.model("test-000009.safetensors")
>>> c = sdm.slerp(a,b)
>>> sdm.merge(c,output="test-slerp.safetensors")
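For anyone wondering what slerp actually does to the two checkpoints, here is a rough, self-contained PyTorch sketch of spherical linear interpolation (illustrative only, not sd_mecha's internals; t=0.5 gives a halfway blend):

import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    dot = torch.clamp(torch.dot(v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:  # nearly parallel: plain lerp is numerically safer
        out = (1 - t) * v0f + t * v1f
    else:
        s = torch.sin(theta)
        out = (torch.sin((1 - t) * theta) / s) * v0f + (torch.sin(t * theta) / s) * v1f
    return out.reshape(v0.shape).to(v0.dtype)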

Anyway, that's all folks. :D

Base idea is here (for the source LoRA): https://civitai.com/images/109655271

It was trained on PrefectIllustrious V3.
