Uncensored Portable AI Prompt Assistant for Wan, SDXL, Flux.1, and more

Model description

Meet the Portable AI Prompt Assistant, a free, open-source desktop application I built to turn reference images into polished prompts for your favorite generative models. It's a simple tool that runs entirely on your local machine, giving you a private and versatile environment to supercharge your creative workflow.

GITHUB-PROJECT-LINK

Why Did I Build This?

While online services are great, I wanted a tool that offered more control, privacy, and flexibility. I needed an application that could:

  • Run locally without sending my images to a third-party server.

  • Interface with the powerful open-source models I already use with Ollama and LM Studio.

  • Compare outputs from different models side-by-side.

  • Be tailored specifically for generating high-quality prompts for various AI art models.

  • Write prompts for uncensored images.

The Image-to-Prompt AI Assistant is the result of that vision.

A Prompt Generation Powerhouse for Any Model

One of the key design goals was to create a tool that isn't locked into one ecosystem. The prompts generated by this app can be finely tuned to work with a huge variety of text-to-image models.

By using custom System Prompts, you can instruct the AI to generate prompts specifically formatted for:

  • SDXL & SD 1.5/2.1: Create detailed, comma-separated prompts with keywords and negative prompts.

  • Stable Diffusion 3 (and Flux.1): Generate prompts that leverage the newer, more descriptive natural language understanding of these models.

  • DALL-E 3: Craft conversational, sentence-based prompts.

  • Midjourney: Produce stylistic and artistic prompts focusing on mood and composition.

  • Wan2.1: Generate prompts for creating a video from a reference image (I2V), or optimize text for use with T2V.

You can save your favorite system prompts—like "Create a cinematic SDXL prompt" or "Generate a simple anime-style prompt"—and switch between them with a single click.

(Compare outputs or set the perfect system prompt for your target model.)
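
For illustration, presets of the kind described above might look like the following. These strings are only examples I'm sketching here, not presets that ship with the app; the exact wording is entirely up to you.

```python
# Hypothetical examples of saved system prompts; the app lets you store any text you like.
SYSTEM_PROMPT_PRESETS = {
    "Cinematic SDXL": (
        "You are a prompt engineer for SDXL. Describe the uploaded image as a single "
        "comma-separated list of keywords covering subject, style, lighting, camera, "
        "and mood. Finish with a short negative prompt on its own line."
    ),
    "Flux.1 / SD3 natural language": (
        "Describe the uploaded image in one or two rich, natural-language sentences "
        "suitable for Flux.1 or Stable Diffusion 3, covering subject, setting, "
        "lighting, and composition. Do not use keyword lists or negative prompts."
    ),
}
```

Switching presets then simply changes which of these strings is sent as the system message alongside your image.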

Core Features

This isn't just a basic interface; it's packed with features designed for a smooth and powerful user experience.

  • Local First, Privacy Always: Works with your local Ollama or LM Studio server. Your images and prompts never leave your machine.

  • Multi-Model Comparison: Select several vision models and get a response from each one simultaneously. See which AI gives you the best description!

  • Advanced Model Management (Ollama): Unload models from memory after a response to free up VRAM, either manually or automatically (see the sketch after this list).

  • Image-Only Analysis: Don't have a specific question? Just upload an image and click "Analyze" to get an instant description.

  • Full User Control: A "Stop Generating" button lets you interrupt long responses at any time.

  • Conversation History: View your entire session and export it as a .txt or .json file for your records.

  • Customizable Workflow: Save and reuse an unlimited number of custom system prompts to tailor the AI's output to your exact needs.
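
To give a rough idea of what the comparison and VRAM-unloading features do behind the scenes, here is a sketch against Ollama's HTTP API. It is not the app's actual source; the image path and model names are placeholders for whatever you have pulled locally.

```python
import base64
import requests

OLLAMA_URL = "http://localhost:11434"   # Ollama's default endpoint
MODELS = ["llava", "gemma3"]            # any vision-capable models you have pulled

# Encode the reference image once so it can be sent to every model.
with open("reference.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

for model in MODELS:
    # Ask each model to describe the same image so the outputs can be compared.
    response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": model,
            "prompt": "Describe this image as a detailed text-to-image prompt.",
            "images": [image_b64],
            "stream": False,
            "keep_alive": 0,  # unload the model right after responding to free VRAM
        },
        timeout=600,
    )
    print(f"--- {model} ---")
    print(response.json().get("response", ""))
```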

Getting Started is Easy

The application is built with Python and Streamlit, and setting it up is simple.

Prerequisites:

  1. Windows 10 or 11.

  2. Ollama or LM Studio (recommended) installed and running.

  3. A vision-capable model (like llava, Gemma 3, etc.) downloaded and loaded.
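
If you want to confirm the prerequisites before launching the app, a quick check like the one below lists whatever models your local server currently has. This sketch targets Ollama's default port 11434; the equivalent check for LM Studio is shown in Step 2 further down.

```python
import requests

# Ollama lists its locally pulled models at /api/tags (default port 11434).
try:
    tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
    print("Ollama is running. Installed models:")
    for m in tags.get("models", []):
        print(" -", m["name"])
except requests.ConnectionError:
    print("Ollama does not appear to be running on localhost:11434.")
```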

🚀 Installation Steps

  1. Download the compressed file and extract its contents.

  2. Open the AiPromptAssistant setup file.

  3. Click Install to begin the installation.


  4. Download and install LM Studio from the following link: https://lmstudio.ai/

⚙️ How to Use the App

After installation, follow these steps:

Step 1: Download a Vision-Capable Model in LM Studio

  1. Navigate to the Model Search: Open LM Studio and click on the search icon (magnifying glass) in the left-hand menu to go to the model search page.

  2. Search for a Model: In the search bar at the top, type the name of the model you want to download. To find models that can process images, you can search for terms like "vision". For example, searching for "gemma," a family of models from Google, will return various versions of the model in the results.

  3. Select and Download the Model: Look for a model that has vision capabilities, often indicated by a "Vision" tag. Click on the model from the search results to see its details on the right side of the screen. Under the "Download Options," you will find different available files. Choose a suitable version and click the "Download" button to save it to your computer.
    Uncensored models that I use:
    https://model.lmstudio.ai/download/concedo/llama-joycaption-beta-one-hf-llava-mmproj-gguf

    https://model.lmstudio.ai/download/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF

    https://model.lmstudio.ai/download/mlabonne/gemma-3-12b-it-abliterated-GGUF

Step 2: Enable the API Server in LM Studio

  1. Select Power User or Developer View: To access the local server settings, you must first enable the appropriate view. At the very bottom of the LM Studio application, click on either "Power User" or "Developer".

  2. Go to the Local Server Tab: In the left-hand menu of LM Studio, click on the icon that looks like <_> to open the local server settings.

  3. Start the Server: At the top of this screen, you will see a toggle switch next to "Status." Click it to change the status from "Stopped" to "Running." This will start the local API server.

  4. Set the Server Port: In the server settings, you can specify the port number for the API; by default it is set to 1234. You can use this default or change it to another available port.
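
Once the status reads "Running," you can optionally verify the server from outside LM Studio. This is just a sanity check against the OpenAI-compatible endpoint that LM Studio exposes, assuming the default port 1234:

```python
import requests

# LM Studio's local server speaks the OpenAI API dialect; /v1/models lists the loaded models.
resp = requests.get("http://localhost:1234/v1/models", timeout=5)
for model in resp.json().get("data", []):
    print(model["id"])
```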

Step 3: Connect Your Application to the LM Studio API

  1. Open the AI Prompt Assistant application.

  2. Configure the API Endpoint: In your application's settings, find the "Configuration" section. Here, you will need to enter the API Base URL. This URL should point to your local machine (localhost) and the port you configured in LM Studio.

  3. Enter the API Base URL: In the "API Base URL" field, type http://localhost: followed by the port number you set in LM Studio. For example, if you used port 1234, you would enter http://localhost:1234. This tells your application where to send requests to interact with the model running in LM Studio.
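
Under the hood, requests to that base URL follow the standard OpenAI chat-completions format. The sketch below is not the app's own code; it only illustrates the kind of call that gets made once the URL is configured, assuming port 1234, a placeholder image file, and whatever model identifier LM Studio shows for the model you loaded.

```python
import base64
from openai import OpenAI

# LM Studio accepts any API key string; only the base URL and port matter locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

with open("reference.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

completion = client.chat.completions.create(
    model="llama-joycaption-beta-one-hf-llava",  # replace with the identifier LM Studio shows
    messages=[
        {"role": "system", "content": "Write a detailed SDXL prompt for the image."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image as a prompt."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        },
    ],
)
print(completion.choices[0].message.content)
```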

⚠️ Important Notice

When you download and install the program, you may see several security warnings. This happens because the package includes an .exe file, and such warnings are completely normal for apps downloaded outside of official app stores or from lesser-known developers.

So please don’t worry—this does not mean the program contains a virus. As with any file you download from the internet, it’s always a good idea to scan it with a trusted antivirus before installing, and then simply ignore the warnings.

If you don’t feel comfortable installing an .exe application, you can always try the open-source Python version instead. Keep in mind, though, that it requires a few extra setup steps and doesn’t include some of the additional features. You can find it here:
👉 Open-Source Python Version

Updates

  • V1.3: Added video upload and analysis support for Google's Gemini models using Files API integration (a sketch of the upload flow appears after this list).

  • V1.2: Added support for Google's Gemini Flash models (1.5, 2.0, and 2.5).

  • V1.1: Added a "Bulk Analysis" tab to analyze all images in a selected folder. You can now enter a folder path and optionally save the generated prompts to text files in the same directory.
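
For reference, video analysis with Gemini goes through Google's Files API. The snippet below shows one common pattern with the google-generativeai package and is only meant to illustrate the flow; the file name, model name, and API key are placeholders, and the app's internal implementation may differ.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key

# Upload the video through the Files API, then wait for server-side processing to finish.
video = genai.upload_file(path="clip.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-flash")  # any vision-capable Gemini model
response = model.generate_content(
    [video, "Describe this video as a detailed text-to-video prompt."]
)
print(response.text)
```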
