mirror of
https://www.modelscope.cn/black-forest-labs/FLUX.2-klein-4b-fp8.git
synced 2026-04-02 10:52:54 +08:00
Update README.md (batch 1/1)
The FLUX.2 [klein] model family are our fastest image models to date. FLUX.2 [klein] unifies generation and editing in a single compact architecture, **delivering state-of-the-art quality with end-to-end inference in under a second**. It is built for applications that require real-time image generation without sacrificing quality, and runs on consumer hardware with as little as 13GB of VRAM.
FLUX.2 [klein] 4B is a 4 billion parameter rectified flow transformer that generates images from text descriptions and supports multi-reference editing.
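The model itself is far too large to sketch here, but the rectified-flow idea behind few-step sampling can be shown with a toy 1-D example. Everything below (the velocity field, data point, and step count) is invented for illustration and is not taken from the model:

```python
import numpy as np

# Rectified flow learns a velocity field v(x, t) along (near-)straight paths
# x_t = (1 - t) * x0 + t * x1 from noise x0 to data x1. Sampling is then just
# Euler integration of dx/dt = v(x, t), which is why a handful of steps can
# suffice when the paths are straight.

rng = np.random.default_rng(0)
x1 = np.array([2.0, -1.0, 0.5])  # toy "data" sample (invented)
x0 = rng.standard_normal(3)      # Gaussian noise starting point

def velocity(x, t):
    # For a perfectly straight path the optimal velocity is the constant
    # x1 - x0. A trained model approximates this from (x, t) alone; here we
    # cheat and use the closed form to show Euler steps recover x1 exactly.
    return x1 - x0

num_steps = 4
x = x0.copy()
for i in range(num_steps):
    t = i / num_steps
    x = x + (1.0 / num_steps) * velocity(x, t)  # Euler step, dt = 1/num_steps

print(np.allclose(x, x1))  # prints True: straight paths need very few steps
```

With a learned, imperfect velocity field the paths are only approximately straight, which is where distillation and the small step counts quoted below come in.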
Fully open under Apache 2.0. Our most accessible model runs on consumer GPUs like the RTX 3090/4070. Compact but capable: supports text-to-image, image editing, and multi-reference at quality that punches above its size. Built for local development, edge deployment, and production use.
This repository holds an [FP8 version](https://huggingface.co/black-forest-labs/FLUX.2-klein-4b-fp8/blob/main/flux-2-klein-4b-fp8.safetensors) of FLUX.2 [klein] 4B. The main repository of this model (full BF16 weights) can be found [here](https://huggingface.co/black-forest-labs/FLUX.2-klein-4B).
For more information, please read our [blog post](https://bfl.ai/blog/flux2-klein-towards-interactive-visual-intelligence).
# **Key Features**
1. Our fastest distilled model for sub-second image generation.
2. Best suited for interactive workflows, production deployments, and latency-critical applications.
3. Text-to-image and image-to-image multi-reference editing in a single unified model.
4. Runs on consumer GPUs (~13GB VRAM).
5. Open weights available for commercial use under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
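As a rough back-of-envelope check on the numbers above (our own arithmetic, not official figures from BFL): 4 billion parameters occupy about 8 GB in BF16 and about 4 GB in FP8, so the quoted ~13GB VRAM figure plausibly covers the text encoder, VAE, activations, and runtime overhead on top of the transformer weights:

```python
# Back-of-envelope weight memory for a 4B-parameter transformer.
# These are our own estimates, not measurements from the model card.
params = 4e9

bytes_bf16 = params * 2  # bfloat16: 2 bytes per parameter
bytes_fp8 = params * 1   # fp8: 1 byte per parameter

gib = 1024 ** 3
print(f"BF16 weights: {bytes_bf16 / gib:.1f} GiB")  # ~7.5 GiB
print(f"FP8 weights:  {bytes_fp8 / gib:.1f} GiB")   # ~3.7 GiB

# The gap up to the quoted ~13GB VRAM would be taken by the text encoder,
# VAE, activation memory, and CUDA/runtime overhead.
```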
# **Usage**
We provide a reference implementation of FLUX.2 [klein] 4B, as well as sampling code, in a dedicated [GitHub repository](https://github.com/black-forest-labs/flux2). Developers and creatives looking to build on top of FLUX.2 [klein] 4B are encouraged to use this as a starting point.
## **API Endpoints**
The FLUX.2 [klein] 4B model is available via the BFL API:
- [bfl.ai](https://bfl.ai)
FLUX.2 [klein] 4B is also available in both [ComfyUI](https://github.com/comfyanonymous/ComfyUI) and [Diffusers](https://github.com/huggingface/diffusers).
## **Using with Diffusers 🧨**
To use FLUX.2 [klein] 4B with the 🧨 Diffusers Python library, first install or upgrade diffusers:
```shell
pip install -U diffusers
```
Then you can use the `Flux2KleinPipeline` to run the model:
```python
import torch
from diffusers import Flux2KleinPipeline

device = "cuda"
dtype = torch.bfloat16

pipe = Flux2KleinPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4B", torch_dtype=dtype)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=4,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
image.save("flux-klein.png")
```
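The call above sets `guidance_scale=4.0`. In many diffusion and flow pipelines this parameter implements classifier-free guidance; whether klein applies the rule per step or bakes guidance into the distilled model is not specified here, so the sketch below shows only the generic combination with invented toy values:

```python
import numpy as np

# Generic classifier-free guidance: extrapolate from the unconditional
# prediction toward the text-conditional one. This is an assumption about
# what guidance_scale controls, not confirmed by this model card.

def apply_cfg(v_uncond, v_cond, guidance_scale):
    return v_uncond + guidance_scale * (v_cond - v_uncond)

v_uncond = np.array([0.0, 1.0])  # toy unconditional prediction
v_cond = np.array([1.0, 1.0])    # toy text-conditional prediction

print(apply_cfg(v_uncond, v_cond, 1.0))  # scale 1.0: purely conditional
print(apply_cfg(v_uncond, v_cond, 4.0))  # scale 4.0: extrapolated to [4. 1.]
```

Higher scales push the sample harder toward the prompt at the cost of diversity, which is why moderate values like 4.0 are common defaults.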
---
# **Limitations**