LLM prompt helper (with FLUX.1 Kontext support)

Details

Model description

Starting from version 2.0, the workflow supports txt2img, img2img, and inpaint functionality and uses the built-in LLM node

https://github.com/AlexYez/comfyui-timesaver

instead of the external Ollama program. The TS_Qwen3_Node can describe images, translate prompts, and enhance prompts.

If your operating system is Windows and you can't install the TS_Qwen3_Node dependencies (for example, you don't have a compiler installed), try downloading the prebuilt .whl file from

https://github.com/boneylizard/llama-cpp-python-cu128-gemma3/releases

then close ComfyUI, open the python_embeded folder, type cmd in the address bar, and execute the following command:

.\python.exe -I -m pip install "path to downloaded .whl file"

After installing, you can run ComfyUI and install the missing custom nodes in the normal way.

Edit: If the .whl install fails, check your Python version and make sure the .whl was built for that version. If it still fails, try opening the .whl as an archive and extracting all of its folders into the python_embeded\Lib\site-packages folder.
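Wheel filenames encode the Python version they were built for (the third field from the end, e.g. cp312 means CPython 3.12), so you can check compatibility before installing. A minimal sketch; the filename used below is hypothetical:

```python
import sys

def wheel_python_tag(wheel_name: str) -> str:
    # Wheel filenames follow: name-version-pythontag-abitag-platform.whl
    stem = wheel_name[: -len(".whl")]
    return stem.split("-")[-3]  # e.g. "cp312"

def matches_current_python(wheel_name: str) -> bool:
    # Compare the wheel's Python tag against the running interpreter.
    current = "cp%d%d" % sys.version_info[:2]
    return wheel_python_tag(wheel_name) == current

# Hypothetical filename, for illustration only:
name = "llama_cpp_python-0.3.4-cp312-cp312-win_amd64.whl"
print(wheel_python_tag(name))  # cp312
```

For the archive-extraction fallback: a wheel is just a zip file, so the embedded interpreter itself can unpack it with `.\python.exe -m zipfile -e thefile.whl Lib\site-packages` if you don't have an archive tool handy.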

===Old versions ===============================

This workflow combines the power of LLM text models managed by Ollama with Flux image generation. It takes an image or text as input and improves the prompt or changes it according to your instructions.

Note: To refresh the LLM model list you need to reload the browser window by pressing the F5 key.

Since 1.8 there is a blue switch in the Generate Image group to enable or disable context support.

Since 1.3 you need to switch blocks on and off and manually copy the prompt text between blocks.

Information:

First of all, you need to download and install Ollama from

https://ollama.com/

In the current workflow each variant uses up to 2 LLM models:

Img2Img uses llava for image tagging and Mistral for manipulations

Combined 1.3 uses llava and phi4

Txt2Img 1.2 uses only phi4

Txt2Img 1.1 uses only Mistral

Before running ComfyUI you need to download the models:

open a command prompt from the Ollama folder (the one with ollama.exe) and run

ollama pull llava:7b (if you have 8-12 GB of VRAM)

or

ollama pull llava:13b (for 16+ GB of VRAM)

and wait for the model to download. For Img2Img and Txt2Img v1.1, run

ollama pull mistral-small

For Txt2Img v1.2 and Combined 1.3, use

ollama pull phi4
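The steps above can be collected into a few commands (shown here with the 7b variant; pick the llava size that fits your VRAM). `ollama list` is a standard Ollama command you can use to verify which models are installed:

```shell
# Pull the vision model (choose 7b or 13b depending on your VRAM)
ollama pull llava:7b

# Pull the text model matching your workflow version:
ollama pull mistral-small   # Img2Img and Txt2Img v1.1
ollama pull phi4            # Txt2Img v1.2 and Combined 1.3

# Verify that the models are available
ollama list
```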

After the download finishes, start ollama app.exe, wait for the tray icon, then start ComfyUI and install the missing custom nodes.

If they are not already set, select llava in the Ollama Vision node and mistral in the Translate and Ollama Generate Advance nodes.

If you plan to give Img2Img instructions in another language, turn on and use the Translate node.

Txt2Img accepts prompts in any language.

====================

For the Redux IP Tools version you need to download 2 models:

Clip Vision -> models\clip_vision

Style model -> models\style_models
