Helltaker Style Illustrious

Model description

Starting Prompt: (Quality & Style Prompts)

masterpiece, best quality, amazing quality, flat_color, sharp, (2d, cartoon, toon \(style\):1.2), 
(shiny, bloom, glossy, vibrant, edge lighting, volumetric lighting, rim lighting),

Style Prompt / LoRA Weight: 0.7 to 0.8 works well

<lora:Offical_Helltaker_Style:0.8>, helltaker_style

[Your own prompts here].

I put character prompts at the bottom, and any actions, outfits, or backgrounds go in the middle.
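
For example, a full prompt assembled in that order might look like this (the action, outfit, background, and character tags here are only placeholders for illustration; swap in your own):

masterpiece, best quality, amazing quality, flat_color, sharp, (2d, cartoon, toon \(style\):1.2), (shiny, bloom, glossy, vibrant, edge lighting, volumetric lighting, rim lighting), <lora:Offical_Helltaker_Style:0.8>, helltaker_style, standing, arms crossed, black suit, office, 1girl, white hair, red eyes, demon horns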

Creator Notes:

I’ve noticed that many people training on a style still caption their images with detailed tags. You don’t need to do this. You can use a single activation tag, or skip tagging entirely and train without an activation tag at all.
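
For example, every caption file in a style dataset can simply contain the activation tag by itself:

helltaker_style

Or, if you train without an activation tag, you leave the captions out entirely.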

When you train on a style, you’re not training on concepts (like poses, characters, outfits, etc.). Concepts depend on both the UNet and the Text Encoder. Styles, however, rely solely on the UNet, which handles pattern and color recognition.

The Text Encoder connects the tags you assign to the images during training, helping the model understand what’s in each image. This isn’t necessary for style training because styles are purely visual—you don’t need to tell the model what objects or subjects are present. Plus, a style affects the entire image, not just a portion of it.

***However, if you do tag the images normally—like adding "1boy," "1girl," or every detail found in the image—the model will start associating those tags with the style itself. During inference, when you use those tags in a prompt, the model might prioritize generating content tied to them (e.g., boys or girls) alongside the style, even though you only meant to capture the visual aesthetic. This can muddy the training, blending concept learning into what’s supposed to be a style-only focus. Without those tags, the style stays more flexible and applies broadly, regardless of the subject.
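
To illustrate the difference, a fully tagged caption like this (a made-up example) would tie those concept tags to the style during training:

helltaker_style, 1girl, white hair, demon horns, black suit, sitting, office, smile

whereas a caption that is just the activation tag (or no caption at all) leaves the style free to apply to any subject you prompt for later.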

That said, some character models do influence the style of an image. This happens because they’re leveraging both the UNet and the Text Encoder together during training.

***Example:

Let's say someone trains on a hentai creator's style, the dataset shows the missionary position multiple times, and those images are tagged normally. Now, whenever someone else uses that style model and prompts for missionary, it will replicate the exact look of missionary from the dataset. Because the dataset was tagged normally, the flexibility of the style model is ruined, and people are stuck with the same missionary position from the same perspective.
