Quantized CyberRealistic Ponyv7 GGUF
Model description
The GGUF files were quantized from this model, and the CLIP text encoders were extracted from it: /model/443821?modelVersionId=1177183
Look there for usage tips and settings.
The model was quantized using the Beginner-friendly Colab Notebook for SDXL Unet+Clip Extraction and GGUF Conversion by the marvellous old_fisherman. It was surprisingly easy and required only a free Google account.
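Under the hood, the extraction step boils down to splitting the single SDXL checkpoint into its component state dicts by key prefix before the UNet is converted to GGUF. Below is a minimal sketch of that idea; the prefixes are the usual SDXL checkpoint layout and the filename is a placeholder, and the actual notebook may do more than this.

```python
from safetensors.torch import load_file, save_file

# Typical SDXL checkpoint key prefixes. Assumed layout for illustration,
# not taken from old_fisherman's notebook.
PREFIXES = {
    "unet": "model.diffusion_model.",
    "clip_l": "conditioner.embedders.0.transformer.",
    "clip_g": "conditioner.embedders.1.model.",
    "vae": "first_stage_model.",
}

# Placeholder filename for the full checkpoint you want to split.
state_dict = load_file("cyberrealistic_pony_v7.safetensors")

for part, prefix in PREFIXES.items():
    # Keep only the tensors belonging to this component and strip the prefix.
    subset = {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}
    save_file(subset, f"{part}.safetensors")
    print(f"{part}: {len(subset)} tensors")
```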
Quantization drastically shrinks the size of the model and therefore its VRAM requirements. If you only have 3 GB or 4 GB of VRAM, this may be of interest to you. old_fisherman also posted a Modular SDXL ControlNet Workflow for a potato PC to go with the models.
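As a rough back-of-envelope illustration of why the savings are so large: the parameter count and effective bits-per-weight below are approximations, not measured values for this particular checkpoint.

```python
# Rough estimate of SDXL UNet size at different precisions.
UNET_PARAMS = 2.6e9  # the SDXL UNet has roughly 2.6 billion parameters

formats = {
    "FP16 (original)": 16.0,  # bits per weight
    "Q8_0": 8.5,              # ~8 bits plus per-block scale overhead
    "Q5_K": 5.5,              # ~5 bits plus per-block metadata
}

for name, bits_per_weight in formats.items():
    size_gb = UNET_PARAMS * bits_per_weight / 8 / 1024**3
    print(f"{name:16s} ~ {size_gb:.1f} GB")
```

That works out to roughly 4.8 GB at FP16 versus about 2.6 GB at Q8 and 1.7 GB at Q5_K, which is why the quantized UNets fit on low-VRAM cards.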
Quality suffers minimally for Q8, often not noticeably at all.
Q5_K is noticeably worse than the full model, but not by a lot. (Bad at text, though.)
You need to load the VAE, CLIP-L, and CLIP-G separately. I’ve uploaded CLIP-L and CLIP-G with the model, but you will need a Pony-compatible VAE of your choice.
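For ComfyUI users, the loading arrangement looks roughly like the API-format fragment below. The node class names assume core ComfyUI plus the ComfyUI-GGUF custom node pack, and all filenames are placeholders, so treat this as a sketch rather than a drop-in workflow.

```python
import json

# Minimal API-format fragment showing the three separate loaders.
workflow_fragment = {
    "1": {  # quantized UNet (GGUF loader from the ComfyUI-GGUF custom nodes)
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "cyberrealistic_pony_v7-Q5_K.gguf"},
    },
    "2": {  # both SDXL text encoders, loaded separately from the UNet
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "clip_l.safetensors",
            "clip_name2": "clip_g.safetensors",
            "type": "sdxl",
        },
    },
    "3": {  # a Pony-compatible SDXL VAE of your choice
        "class_type": "VAELoader",
        "inputs": {"vae_name": "sdxl_vae.safetensors"},
    },
}

print(json.dumps(workflow_fragment, indent=2))
```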
Sample images are my first attempts at the prompts using the 2-step Lightning LoRA, so they are very much not the best the quantized model can do. In fact, I challenge you to make a better picture with this model and publish it. Shouldn’t be too hard.
