LLM Prompt + Assist


Model description

With all the talk about LLMs being implemented into CivitAI's systems, I thought I'd share the core LLM section of my workflow with everyone.

The focus here is on the prompt and the LLM's modifications of it. The last part of the workflow (the actual image generation) is deliberately very basic and simple; the idea is that the core LLM portion of the prompt generation can be dropped into whatever workflow you prefer, so you can adapt it into your favorite setups.

A few comments on the LLM Nodes:-

Each LLM node/model can be "spoken" to directly. Whatever you give it, it will answer. You can actually ask it questions like "Do you think you'll take over the world?", and depending on your model it will answer. Keep that in mind when giving the LLM nodes instructions. I've included some examples of what to put in the LLM nodes, but you'll get the idea after you play with them a little. "Describe", "Imagine", "Think about", etc. are all your friends here.
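
If you want to see how this kind of instruction behaves outside ComfyUI, here is a minimal sketch using llama-cpp-python with a local GGUF instruct model. The model path, sampling settings, and instruction wording are placeholders of mine, not part of the workflow:

```python
# Minimal sketch: "speaking" to a local GGUF instruct model the same way an
# LLM node does. Assumes llama-cpp-python is installed; the model path below
# is a placeholder, not a file shipped with this workflow.
from llama_cpp import Llama

llm = Llama(model_path="models/your-instruct-model.gguf", n_ctx=4096, verbose=False)

# An instruction-style request ("Describe ...") keeps the reply on topic.
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe what a female elf wizard looks like."}],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```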

How it works and why it's set up this way

There is a main prompt box where you can put whatever you like, or leave it blank. Anything you put there will either be read directly or be evaluated by the LLM Clean Assist node. This matters because it gives you flexibility: you don't need to use the LLM Clean Assist node at all, as I'll explain below.

The two LLM nodes are set up to ask the LLM different things. Why is this important? Well, if you ask the LLM to "Describe a female elf wizard sitting in a dungeon", it will give you a description of that, but it will most likely also invent motives: why she is in the dungeon, how she feels about being there.

LLMs, especially storytelling or RP models, try to craft a reasonable response that covers every element you give them. I found the easiest way to prevent that is to give the LLM a very narrow focus: by splitting the elements you want into separate tasks, you get a better description of each one.

Try to break up the scene so each LLM node has its own focus. Using the example above, splitting it into two separate LLM queries helps produce a better image: LLM One gets "Describe what a dark dungeon would look like" and LLM Two gets "Describe what a female elf wizard looks like". The prompt merger then joins those two descriptions together, and after that comes the LLM Clean Assist node.
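
Here is the same split-focus idea as a rough sketch. This is not the internals of the workflow's nodes, just an illustration assuming llama-cpp-python and a local GGUF model; the path and the join format are placeholders:

```python
# Sketch of the split-focus idea: each LLM query gets one narrow subject, and
# the prompt merger just joins the two replies. Assumes llama-cpp-python and a
# local GGUF instruct model; the path and wording are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="models/your-instruct-model.gguf", n_ctx=4096, verbose=False)

def describe(instruction: str) -> str:
    """Send one narrow instruction to the model and return its reply."""
    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": instruction}],
        max_tokens=200,
    )
    return reply["choices"][0]["message"]["content"]

scene = describe("Describe what a dark dungeon would look like.")
subject = describe("Describe what a female elf wizard looks like.")
merged_prompt = scene + ", " + subject  # roughly what the prompt merger outputs
```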

LLM Clean Assist Node:- (Active)

This is the final parsing LLM, the one that usually brings it all together. It takes all the text from the previous nodes, reads it, and generates an output that becomes the final prompt.

LLM Clean Assist Node:- (De-activated)

With it de-activated, the workflow simply pushes the entire merged prompt straight to the image generation.
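
Put together, the active/de-activated behaviour amounts to an optional final pass over the merged prompt, roughly like this. Again, this is only a sketch assuming llama-cpp-python; the cleanup instruction text is my own example, not the exact wording in the node:

```python
# Sketch of the Clean Assist toggle: active = one more LLM pass over the merged
# prompt; de-activated = the merged prompt goes to image generation unchanged.
# Assumes llama-cpp-python; the model path and instruction are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="models/your-instruct-model.gguf", n_ctx=4096, verbose=False)

def clean_assist(merged_prompt: str, active: bool = True) -> str:
    if not active:
        return merged_prompt  # de-activated: pass the merged prompt straight through
    reply = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": "Rewrite the following into a single concise image prompt: " + merged_prompt,
        }],
        max_tokens=256,
    )
    return reply["choices"][0]["message"]["content"]
```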

Just a few notes:

This does not include any LLM files, but I have notes in the workflow on a few models that might help you. That said, any instruct or chat LLM can be used as long as it's in GGUF format. Instructions are included in the workflow.
