NewBie image

Model description

NewBie image Exp0.1

🧱 Exp0.1 Base

  • NewBie image Exp0.1 is a 3.5B-parameter DiT model developed through research on the Lumina architecture.

    Building on insights from that research, it adopts Next-DiT as its foundation and introduces a new NewBie architecture tailored for text-to-image generation.

    NewBie image Exp0.1 is trained within this newly constructed system and represents the first experimental release of the NewBie text-to-image generation framework.

Text Encoders

  • We use Gemma3-4B-it as the primary text encoder, conditioning on its penultimate-layer token hidden states. We also extract pooled text features from Jina CLIP v2, project them, and fuse them into the time/AdaLN conditioning pathway. Together, Gemma3-4B-it and Jina CLIP v2 provide strong prompt understanding and improved instruction adherence.
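
The conditioning code itself is not published in this card, so the following is only a minimal PyTorch sketch of the fusion described above; the hidden sizes (2560 for Gemma3-4B-it, 1024 for Jina CLIP v2) and the model width `dim` are assumptions, and the real projection layers may be structured differently.

```python
import torch
import torch.nn as nn


class DualTextConditioner(nn.Module):
    """Illustrative fusion of the two text-conditioning signals described above."""

    def __init__(self, dim=2304, gemma_dim=2560, clip_dim=1024):
        super().__init__()
        # Project Gemma3-4B-it penultimate-layer token states to the DiT width
        # (token-level context for the transformer blocks).
        self.ctx_proj = nn.Linear(gemma_dim, dim)
        # Project the pooled Jina CLIP v2 embedding and add it to the timestep
        # embedding that drives the AdaLN modulation.
        self.pooled_proj = nn.Sequential(
            nn.Linear(clip_dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )

    def forward(self, gemma_hidden, clip_pooled, time_emb):
        # gemma_hidden: [B, T, gemma_dim]  penultimate-layer hidden states
        # clip_pooled:  [B, clip_dim]      pooled text features
        # time_emb:     [B, dim]           timestep embedding
        context = self.ctx_proj(gemma_hidden)
        adaln_cond = time_emb + self.pooled_proj(clip_pooled)
        return context, adaln_cond


# Shape check with random tensors (no pretrained weights are loaded here):
cond = DualTextConditioner()
ctx, ada = cond(torch.randn(1, 77, 2560), torch.randn(1, 1024), torch.randn(1, 2304))
```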

VAE

  • We use the FLUX.1-dev 16-channel VAE to encode images into latents, delivering richer, smoother color rendering and finer texture detail, which helps safeguard the stunning visual quality of NewBie image Exp0.1.
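
For illustration only (this is not the project's training code), the FLUX.1-dev VAE can be loaded via `diffusers` and used to encode an image into 16-channel latents; the shift/scale normalization follows the values stored in the VAE config.

```python
import torch
from diffusers import AutoencoderKL

# Load the 16-channel VAE shipped with FLUX.1-dev (gated repo; requires access).
vae = AutoencoderKL.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.float16
).to("cuda")


@torch.no_grad()
def encode_to_latents(pixels: torch.Tensor) -> torch.Tensor:
    """pixels: [B, 3, H, W] scaled to [-1, 1]; returns [B, 16, H/8, W/8] latents."""
    posterior = vae.encode(pixels.to(vae.device, dtype=vae.dtype)).latent_dist
    latents = posterior.sample()
    # Normalize with the shift/scaling factors stored in the VAE config (FLUX convention).
    return (latents - vae.config.shift_factor) * vae.config.scaling_factor


latents = encode_to_latents(torch.rand(1, 3, 1024, 1024) * 2 - 1)
```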

Prompt

  • XML-structured prompt

  • Natural language prompt

  • Tag prompt

🖼️ Task type

NewBie image Exp0.1 is pretrained on a large corpus of high-quality anime data, enabling the model to generate remarkably detailed and visually striking anime-style images.

We reformatted the dataset text into an XML-structured format for our experiments. Empirically, this improved attention binding and attribute/element disentanglement, and also led to faster convergence.

Besides that, it also supports natural-language and tag inputs.
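
The exact XML schema used for training is not documented here, so the snippet below only illustrates what the three supported input styles might look like; every XML element and attribute name is a placeholder, not the official format.

```python
# Hypothetical examples of the three supported prompt styles.
# The XML element/attribute names below are placeholders, not the official schema.
xml_prompt = """\
<prompt>
  <character hair="silver" eyes="blue" outfit="school uniform"/>
  <scene>rooftop at sunset, city skyline in the background</scene>
  <style>anime, detailed lineart, soft lighting</style>
</prompt>"""

natural_prompt = (
    "An anime girl with silver hair and blue eyes in a school uniform, "
    "standing on a rooftop at sunset with a city skyline behind her."
)

tag_prompt = "1girl, silver hair, blue eyes, school uniform, rooftop, sunset, cityscape"
```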

🧰 Model Zoo

NewBie image Exp0.1: Hugging Face | ModelScope

Gemma3-4B-it: Hugging Face | ModelScope

Jina CLIP v2: Hugging Face | ModelScope

FLUX.1-dev VAE: Hugging Face | ModelScope

💪 Training procedure

🔬 Participate

Core Members

✨ Acknowledgments

  • Thanks to the Alpha-VLLM Org for open sourcing the advanced Lumina family, which has been invaluable for our research.

  • Thanks to Google for open sourcing the powerful Gemma3 LLM family.

  • Thanks to the Jina AI Org for open sourcing the Jina family, enabling further research.

  • Thanks to Black Forest Labs for open sourcing the FLUX VAE; this powerful 16-channel VAE is one of the key components behind the improved image quality.

  • Thanks to Neta.art for fine-tuning and open sourcing the Lumina-Image-2.0 base model. Neta-Lumina gives us the opportunity to study the performance of Next-DiT on anime content.

  • Thanks to DeepGHS/narugo1992/SumomoLee for providing high-quality anime datasets.

  • Thanks to Nyanko for the early help and support.

📖 Contribute

  • Neko, 衡鲍, XiaoLxl, xChenNing, Hapless, Lius

  • WindySea, 秋麒麟热茶, 古柯, Rnglg2, Ly, GHOSTLXH

  • Sarara, Seina, KKT机器人, NoirAlmondL, 天满, 暂时

  • Wenaka喵, ZhiHu, BounDless, DetaDT, 紫影のソナーニル

  • 花火流光, R3DeK, 圣人A, 王王玉, 乾坤君Sennke, 砚青

  • Heathcliff01, 无音, MonitaChan, WhyPing, TangRenLan

  • HomemDesgraca, EPIC, ARKBIRD, Talan, 448, Hugs288

🧭 Community Guide

Getting Started Guide

LoRA Trainer

💬 Communication

📜 License

  • Model Weights: Newbie Non-Commercial Community License (Newbie-NC-1.0).

    Applies to: model weights/parameters/configs and derivatives (fine-tunes, LoRA, merges, quantized variants, etc.)

    For non-commercial use only; derivatives must be shared under the same license.

    See NewBie-image-Exp0.1 LICENSE.md

  • Code: Apache License 2.0.

    Applies to: training/inference scripts and related source code in this project.

    See Apache-2.0

⚠️ Disclaimer

This model may produce unexpected or harmful outputs. Users are solely responsible for any risks and potential consequences arising from its use.
