![Teaser](./editing.jpg)
![Teaser](./others.jpg)
The FLUX.2 [klein] model family is our fastest set of image models to date. FLUX.2 [klein] unifies generation and editing in a single compact architecture, **delivering state-of-the-art quality with end-to-end inference in under a second**. It is built for applications that require real-time image generation without sacrificing quality, and runs on consumer hardware with as little as 13GB VRAM.
FLUX.2 [klein] 4B Base is a 4 billion parameter rectified flow transformer that generates images from text descriptions and supports multi-reference editing.
It is a full-capacity, undistilled foundation model that preserves the complete training signal for maximum flexibility. It is ideal for fine-tuning, LoRA training, research, and custom pipelines where control matters more than speed, and it offers higher output diversity than the distilled models.
This repository holds an [FP8 version](https://huggingface.co/black-forest-labs/FLUX.2-klein-base-4b-fp8/blob/main/flux-2-klein-base-4b-fp8.safetensors) of FLUX.2 [klein] 4B Base. The main repository of this model (full BF16 weights) can be found [here](https://huggingface.co/black-forest-labs/FLUX.2-klein-base-4B).
For more information, please read our [blog post](https://bfl.ai/blog/flux2-klein-towards-interactive-visual-intelligence).
# **Key Features**
1. Exceptional speed and quality-to-size ratio.
2. Ideal for local deployment and fine-tuning on limited hardware.
3. Trained without step or guidance distillation, making FLUX.2 [klein] 4B Base more efficient and flexible.
4. Open weights for customization and fine-tuning, to drive science and research and to empower artists to iterate with speed.
5. Outputs can be used for commercial purposes, as described in the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
# **Usage**
We provide a reference implementation of FLUX.2 [klein] 4B Base, as well as sampling code, in a dedicated [GitHub repository](https://github.com/black-forest-labs/flux2). Developers and creatives looking to build on top of FLUX.2 [klein] 4B Base are encouraged to use this as a starting point.
FLUX.2 [klein] 4B Base is also available in both [ComfyUI](https://github.com/comfyanonymous/ComfyUI) and [Diffusers](https://github.com/huggingface/diffusers).
## **Using with Diffusers 🧨**
To use FLUX.2 [klein] 4B Base with the 🧨 Diffusers python library, first install or upgrade diffusers:
```shell
pip install -U diffusers
```

Then you can use `Flux2KleinPipeline` to run the model:

```python
import torch
from diffusers import Flux2KleinPipeline

device = "cuda"
dtype = torch.bfloat16

pipe = Flux2KleinPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-base-4B", torch_dtype=dtype
)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading model components to CPU

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=50,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
image.save("flux-klein.png")
```
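In the example above, the seeded `torch.Generator` is what makes runs reproducible: the pipeline draws its initial latent noise from the generator, so the same seed with a fixed prompt and settings yields the same image. A minimal sketch of that property, using plain tensors rather than the pipeline:

```python
import torch


def seeded_noise(seed: int, shape=(4, 8)) -> torch.Tensor:
    # The pipeline draws its initial latents from the generator,
    # so re-seeding reproduces the exact same starting noise.
    gen = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(shape, generator=gen)


same_a = seeded_noise(0)
same_b = seeded_noise(0)
different = seeded_noise(1)

assert torch.equal(same_a, same_b)         # identical seed -> identical noise
assert not torch.equal(same_a, different)  # new seed -> new noise
```

Conversely, omitting the generator (or varying the seed) is the easy way to explore the model's output diversity for a single prompt.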
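As a rough back-of-the-envelope for why the FP8 checkpoint matters on limited hardware: FP8 stores one byte per weight versus two bytes for BF16, so for a 4-billion-parameter transformer the weights alone shrink by half. The figures below are illustrative arithmetic only, not measured VRAM usage (activations, the text encoder, and the VAE add more on top):

```python
# Illustrative estimate of weight storage for a 4B-parameter model.
PARAMS = 4_000_000_000


def weight_gib(bytes_per_param: float, params: int = PARAMS) -> float:
    """Return weight storage in GiB for a given per-parameter precision."""
    return params * bytes_per_param / 1024**3


bf16_gib = weight_gib(2.0)  # BF16: 2 bytes per parameter
fp8_gib = weight_gib(1.0)   # FP8: 1 byte per parameter

print(f"BF16 weights: ~{bf16_gib:.1f} GiB, FP8 weights: ~{fp8_gib:.1f} GiB")
# -> BF16 weights: ~7.5 GiB, FP8 weights: ~3.7 GiB
```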
---
# **Limitations**