( Love Live! Nijigasaki ) Ousaka Shizuku F.1 Dev LoRA
Details
Download Files
Model description
Ousaka Shizuku FLUX.1 Dev LoRA Model
The training dataset consists entirely of card images that include this character.
For various reasons, I couldn't get the quality as high as the official card images, but the basic characteristics can still be reproduced by strictly following the prompts below:
<lora:ousaka-shizuku:1>, ousaka shizuku, 1girl, solo, blue eyes, brown hair, long hair, etc.
Or just check out the prompts of the preview images I have provided.
Tip: In the future, I plan to keep only the main trigger word and remove the basic facial-characteristic tags, so that the single trigger word "ousaka shizuku" may be enough to generate the character correctly.
In my workflow using svdquant, you don't even need to include <lora:ousaka-shizuku:1>; just entering the trigger word is enough to use the LoRA. (If you are still using stable-diffusion-webui, you can ignore this paragraph.)
In my test using svdquant, I used a weight of 1 instead of 1.25 and the quality was still good. (Treat this conclusion as tentative.)
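For readers wondering what the weight value actually changes: at inference time a LoRA's low-rank update is added to the base weights scaled by that factor, roughly W' = W + weight * (B @ A). A minimal pure-Python sketch with toy 2x2 matrices (not the real FLUX.1 Dev weights, just an illustration of the scaling):

```python
# Toy illustration of LoRA weight scaling: W' = W + scale * (B @ A).
# The matrices here are tiny stand-ins, not real model weights.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def apply_lora(base, lora_a, lora_b, scale=1.0):
    """Return base + scale * (B @ A), the effective weight at inference."""
    delta = matmul(lora_b, lora_a)
    return [[base[i][j] + scale * delta[i][j]
             for j in range(len(base[0]))]
            for i in range(len(base))]

base = [[1.0, 0.0], [0.0, 1.0]]   # base weight (2 x 2)
lora_a = [[0.1, 0.2]]             # rank-1 down-projection A (1 x 2)
lora_b = [[1.0], [2.0]]           # rank-1 up-projection   B (2 x 1)

w_at_1 = apply_lora(base, lora_a, lora_b, scale=1.0)
w_at_125 = apply_lora(base, lora_a, lora_b, scale=1.25)
```

So lowering the weight from 1.25 to 1 simply shrinks the whole LoRA delta by the same factor, which is why the character can still come through cleanly at the lower value.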
Use DeepBooru (Danbooru) tags instead of natural-language prompts, because I did not use the Florence-2 model for tagging.
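A Danbooru-style prompt is just a comma-joined list of tags, with the <lora:name:weight> activation tag prepended only for UIs (such as stable-diffusion-webui) that parse that syntax. A small sketch; `build_prompt` is a hypothetical helper of mine, not part of any tool:

```python
# Build a Danbooru-tag prompt for this LoRA.
# build_prompt is a hypothetical helper, not part of any UI.

def build_prompt(tags, lora_name=None, weight=1.0):
    """Join tags with commas; optionally prepend a <lora:name:weight>
    activation tag for UIs that parse that syntax (e.g. webui)."""
    parts = list(tags)
    if lora_name is not None:
        parts.insert(0, f"<lora:{lora_name}:{weight:g}>")
    return ", ".join(parts)

tags = ["ousaka shizuku", "1girl", "solo",
        "blue eyes", "brown hair", "long hair"]

# webui-style prompt, LoRA activated inline in the prompt text:
webui_prompt = build_prompt(tags, lora_name="ousaka-shizuku", weight=1.0)
# workflow style where a separate node loads the LoRA, tags only:
node_prompt = build_prompt(tags)
```

The second form matches the svdquant workflow described above, where the LoRA is loaded by its own node and the prompt carries only the tags.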
The base model is FLUX.1 Dev, so it may not work well with other FLUX.1 variants, I think.
(I am still a NEWBIE here.)
Usage Instructions
You can use the trigger word plus the correct characteristic tags (in the future I may release a lazy version with the basic characteristic tags removed, so a single trigger word alone might be enough) to generate the character correctly:
<lora:ousaka-shizuku:1>, ousaka shizuku, 1girl, solo, blue eyes, brown hair, long hair, etc.
Please use DeepBooru vocabulary for the prompt rather than natural-language descriptions, because I was lazy and did not use the Florence-2 model for tagging.
The base model is still FLUX.1 Dev; please don't mix it up yet again.
Test Conclusions for My Workflow
My earlier conclusion that a weight of 1.25 is needed can actually be overturned: in my svdquant tests I found that the LoRA is loaded by a dedicated svdquant node, so there is no need to type lora:ousaka-shizuku:1 in the CLIP Text input.
However, if you are not using this workflow, or your LoRA is not loaded by a dedicated node such as Load LoRA (for example, in the traditional stable-diffusion-webui-forge), you can ignore this point; still, you can now reduce the weight to 1.
Some Training Parameters
Given my laptop's limited performance, my current approach is to keep the total number of training steps as low as possible (total steps < 1k) for efficiency; compared with those who have plenty of hardware, though, my setup is still far, far too slow.
On top of that, I hardly have time to work on this, so please bear with me. Even though I've been on this site for over a year, I'm afraid I'm still a newcomer to AIGC.