This guide covers how to train LoRAs and fine-tune FLUX.2 [klein] models on your own datasets. With open weights available under the Apache 2.0 license (4B) and the FLUX Non-Commercial License (9B), you can create custom models tailored to your specific needs.
Overview
FLUX.2 [klein] Base models are ideal for fine-tuning due to their undistilled architecture, which preserves the full training signal. This makes them perfect for:
- LoRA Training: Lightweight adapters for style transfer and character consistency
- Full Finetuning: Complete model adaptation for specialized domains
- Research: Experimentation with novel training techniques
Why Train FLUX.2 [klein] Models?
Style Transfer
Create custom artistic styles that can be applied to any subject matter. Perfect for consistent branding or artistic projects.
Character Consistency
Train models to generate specific characters or people with consistent features across different scenes and poses.
Domain Specialization
Adapt models for specialized domains like medical imaging, technical illustrations, or specific art movements.
Concept Learning
Teach the model new concepts, objects, or visual patterns not well-represented in the base training data.
Community Tools
Open-source frameworks provide full control over the training process:
AI-Toolkit
All-in-one training suite with GUI and CLI. Optimized for consumer GPUs with 12GB+ VRAM.
Diffusers
Official Hugging Face library with DreamBooth and LoRA training examples for FLUX.2.
Model Variants
Choose the right [klein] variant for your use case:
| Variant | Best For | License |
|---|---|---|
| klein 4B Base | Quick iterations | Apache 2.0 |
| klein 9B Base | Maximum quality, complex concepts | FLUX Non-Commercial |
Base models are undistilled and provide higher output diversity, making them ideal starting points for fine-tuning. The 4B variant is recommended for most users due to lower hardware requirements.
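To make this concrete, loading a base checkpoint with Diffusers might look like the sketch below. The `Flux2Pipeline` class name and the repository ID are assumptions based on the FLUX.1 Diffusers integration; check the exact names on the Hugging Face model cards before running it.

```python
import torch
from diffusers import Flux2Pipeline  # assumed class name; verify in your diffusers version

# Hypothetical repository ID; confirm the exact name on Hugging Face.
pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4B",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Base models are undistilled, so classifier-free guidance applies.
image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=28,
    guidance_scale=4.0,
).images[0]
image.save("sample.png")
```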
System Requirements
Minimum Hardware
For klein 4B Base Training
- GPU: NVIDIA with 12GB+ VRAM (e.g., RTX 3060 12GB, RTX 4060 Ti 16GB)
- RAM: 32GB system memory
For klein 9B Base Training
- GPU: NVIDIA with 22GB+ VRAM (e.g., RTX 3090, RTX 4090)
- RAM: 64GB system memory
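If you are close to these limits, Diffusers ships standard memory savers that trade speed for VRAM. A minimal sketch, reusing the assumed pipeline class and repository ID from above:

```python
import torch
from diffusers import Flux2Pipeline  # assumed class name

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4B",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,           # roughly halves memory vs. float32
)

# Keeps submodules on the CPU and moves each to the GPU only while it runs.
pipe.enable_model_cpu_offload()

# Even more aggressive (and slower): offload at the parameter level instead.
# pipe.enable_sequential_cpu_offload()
```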
Training Types
LoRA Training
Low-Rank Adaptation (LoRA) is the most popular training method:
- ✅ Lightweight (typically 10-200MB; the rank sketch below shows why)
- ✅ Fast training (1-3 hours on consumer GPUs)
- ✅ Easy to share and combine
- ✅ Minimal hardware requirements
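Those small file sizes follow directly from the LoRA rank: the adapter stores only two low-rank matrices per targeted layer instead of full weight deltas. A minimal configuration sketch using the `peft` library; the `target_modules` names are assumptions that depend on how the model's attention projections are named:

```python
from peft import LoraConfig

# Rank drives both capacity and file size: higher r means a larger,
# more expressive adapter.
lora_config = LoraConfig(
    r=16,                 # typical range: 8-64
    lora_alpha=16,        # scaling factor, often set equal to r
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed layer names
)

# Attach the adapter to the transformer before training, e.g.:
# transformer.add_adapter(lora_config)
```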
Full Fine-tuning
Complete model adaptation for maximum control:
- ⚠️ Large file sizes
- ⚠️ Longer training times (days to weeks)
- ⚠️ High-end hardware recommended
- ✅ Maximum flexibility and quality
Getting Started
Step-by-Step Training Example
Follow our complete hands-on guide with a real dataset example. Learn how to prepare data, configure training, and use your trained LoRA.
Quick Start Resources
Download Base Weights
Get FLUX.2 [klein] base models from Hugging Face.
Klein Prompting Guide
Learn how to prompt Klein models effectively.
Training Best Practices
Dataset Preparation
Image Quality
- Use high-resolution images (1024px or higher)
- Ensure consistent quality across all training images
- Remove artifacts and low-quality samples
Caption Writing
- Use descriptive, detailed captions
- Include your trigger word consistently, following this pattern (a caption-file helper sketch follows this list):
[trigger] - Describe everything visible except the style/concept you want to teach the model
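A common on-disk convention, used by trainers such as AI-Toolkit, pairs every image with a same-named `.txt` caption file. A small helper sketch assuming that layout; the trigger token and captions are placeholders:

```python
from pathlib import Path

TRIGGER = "myst3r_style"  # hypothetical trigger token; pick something rare
DATASET = Path("dataset")
DATASET.mkdir(exist_ok=True)

captions = {
    "img_001.png": "a woman reading on a park bench, soft morning light",
    "img_002.png": "close-up portrait of a woman, neutral gray background",
}

# Write one caption file per image, trigger word first.
for image_name, caption in captions.items():
    caption_path = DATASET / Path(image_name).with_suffix(".txt")
    caption_path.write_text(f"{TRIGGER}, {caption}\n", encoding="utf-8")
```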
Dataset Diversity
- Vary poses, angles, and compositions
- Include different lighting conditions
- Mix close-ups with full scenes
- Avoid repetitive backgrounds
Training Parameters
Learning Rate
- LoRA Training: 8e-5 to 1e-4
- Full Fine-tuning: 1e-5 to 5e-5
- Lower rates for style, higher for characters
Training Steps
- Style LoRAs: 1500-2500 steps
- Character LoRAs: 1500-3000 steps
- Monitor sample outputs to avoid overfitting
Resolution
- Start with 512px for faster iterations
- Use 1024px or higher for final training
- Use higher resolution when you need to capture fine detail (a consolidated parameter sketch follows this list)
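Pulled together, the guidance above might look like the following starting point for a style LoRA on klein 4B. The dictionary keys are illustrative, not the schema of any particular trainer:

```python
# Illustrative hyperparameter set for a style LoRA; adjust per the ranges above.
train_config = {
    "learning_rate": 1e-4,             # LoRA range: 8e-5 to 1e-4
    "max_train_steps": 2000,           # style LoRAs: 1500-2500
    "resolution": 512,                 # raise to 1024 for the final run
    "train_batch_size": 1,
    "gradient_accumulation_steps": 4,  # effective batch size of 4
    "checkpointing_steps": 250,        # frequent saves make overfit runs recoverable
}
```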
Using Your Trained LoRA
After training, you can use your LoRA with various tools:
- Python (Diffusers), shown in the sketch below
- ComfyUI
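For the Diffusers route, loading the adapter is a single call on the pipeline. `load_lora_weights` is the standard Diffusers LoRA-loading API; the pipeline class, repository ID, and file path below are assumptions:

```python
import torch
from diffusers import Flux2Pipeline  # assumed class name

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4B",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load the adapter produced by training (example path).
pipe.load_lora_weights("output/my_style_lora.safetensors")

image = pipe(
    "myst3r_style, a lighthouse at dawn",  # include your trigger word
    num_inference_steps=28,
    guidance_scale=4.0,
).images[0]
image.save("lora_sample.png")
```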

