hyper bottom heavy SDXL
Model description
This is a test LoRA of my bottomheavy dataset, but trained on SDXL. I wanted to see how SDXL would handle the same data and config as my last released model.
The results came out better than I expected, and it was pretty quick to train compared to SD1, so I'm uploading it just for fun. Still, don't expect too much from this LoRA: it was a low-effort training attempt, and I have no idea how to prompt for SDXL yet, so keep that in mind.
See Training Data for the tag list.
Training Findings:
SDXL does seem to learn the concept faster than SD1: my last bottomheavy model trained much longer with a similar config, 16 epochs here vs. 160 epochs there (though this model is probably undertrained).
A lower network_dim seems to work fine; I will probably try going even lower than 16 later.
Training is about 2-3x slower per iteration on SDXL than it was for SD1, but how quickly SDXL learns the concept keeps the overall training time somewhat competitive.
I didn't necessarily need text encoder training to get decent results. On SD1 I would have had to train for much, much longer without text encoder training; here the concept was learned pretty well without it (a rough sketch of that setup is below).
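For anyone curious what "rank 16, no text encoder training" looks like in practice, here is a minimal sketch using diffusers + peft. This is an illustration only, not the actual training script or config used for this LoRA; the base model ID, alpha value, and target modules are assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline
from peft import LoraConfig

# Assumed base model; the real training setup may differ.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float32
)

# Freeze everything first; both SDXL text encoders stay frozen
# (i.e. no text encoder training).
pipe.unet.requires_grad_(False)
pipe.text_encoder.requires_grad_(False)
pipe.text_encoder_2.requires_grad_(False)

# Rank-16 LoRA (network_dim=16 equivalent) on the UNet attention
# projections only. Alpha and target modules are illustrative choices.
unet_lora = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
pipe.unet.add_adapter(unet_lora)

# Only the injected LoRA weights end up trainable.
trainable = [p for p in pipe.unet.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable LoRA parameters")
```

The same idea applies to kohya-style trainers: set network_dim to 16 and leave text encoder training disabled, and only the UNet LoRA weights get updated.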