Mirror of https://www.modelscope.cn/alimama-creative/FLUX.1-Turbo-Alpha.git (synced 2026-04-02)
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
tags:
- Text-to-Image
- FLUX
- Stable Diffusion
pipeline_tag: text-to-image
---
#### You can download the model with the `git clone` command below, or via the ModelScope SDK.

SDK download

```bash
# Install the ModelScope SDK
pip install modelscope
```

```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download

model_dir = snapshot_download('alimama-creative/FLUX.1-Turbo-Alpha')
```

Git download

```bash
# Download the model with git
git clone https://www.modelscope.cn/alimama-creative/FLUX.1-Turbo-Alpha.git
```
<div style="display: flex; justify-content: center; align-items: center;">
  <img src="./images/images_alibaba.png" alt="alibaba" style="width: 20%; height: auto; margin-right: 5%;">
  <img src="./images/images_alimama.png" alt="alimama" style="width: 20%; height: auto;">
</div>

[Chinese README](./README_ZH.md)
This repository provides an 8-step distilled LoRA for the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) model, released by the AlimamaCreative Team.
# Description

This checkpoint is an 8-step distilled LoRA trained on top of the FLUX.1-dev model. We use a multi-head discriminator to improve distillation quality. The LoRA can be used for text-to-image generation, the inpainting ControlNet, and other FLUX-based models. Recommended settings: guidance_scale=3.5 and lora_scale=1. A lower-step version will be released later.

- Text-to-Image.


- With [alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta](https://huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta). Our distilled LoRA adapts well to the inpainting ControlNet, and the accelerated output closely follows the original, non-accelerated output.


# How to use

## diffusers

This model can be used directly with `diffusers`:
```python
import torch
from diffusers.pipelines import FluxPipeline

model_id = "black-forest-labs/FLUX.1-dev"
adapter_id = "alimama-creative/FLUX.1-Turbo-Alpha"

pipe = FluxPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "A DSLR photo of a shiny VW van that has a cityscape painted on it. A smiling sloth stands on grass in front of the van and is wearing a leather jacket, a cowboy hat, a kilt and a bowtie. The sloth is holding a quarterstaff and a big book."
image = pipe(
    prompt=prompt,
    guidance_scale=3.5,
    height=1024,
    width=1024,
    num_inference_steps=8,
    max_sequence_length=512,
).images[0]
```
## comfyui

- T2I turbo workflow: [click here](./workflows/t2I_flux_turbo.json)
- Inpainting controlnet turbo workflow: [click here](./workflows/alimama_flux_inpainting_turbo_8step.json)
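ComfyUI workflows such as the ones linked above are plain JSON graphs, so they can be inspected or patched programmatically before loading them into the UI. A minimal sketch with a hypothetical two-node excerpt (the node IDs, class types, and input fields below are illustrative, not taken from the actual workflow files):

```python
import json

# Hypothetical excerpt in ComfyUI's API export format: node id -> {class_type, inputs}
workflow_json = """
{
  "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": "flux1-dev.safetensors"}},
  "2": {"class_type": "LoraLoader", "inputs": {"lora_name": "FLUX.1-Turbo-Alpha.safetensors", "strength_model": 1.0}}
}
"""

workflow = json.loads(workflow_json)
# List the node types in the graph, e.g. to check which loader nodes a workflow uses
node_types = [node["class_type"] for node in workflow.values()]
```

The same pattern works for the real workflow files: load the JSON, locate the LoRA loader node, and adjust fields such as the LoRA strength before importing.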
# Training Details

The model is trained on 1M images from open-source and internal collections, filtered for an aesthetic score of 6.3+ and a resolution greater than 800. We use adversarial training to improve quality: our method freezes the original FLUX.1-dev transformer as the discriminator backbone and adds multiple heads to every transformer layer. The guidance scale is fixed at 3.5 during training, and the time shift is set to 3.
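The "time shift" above refers to the FLUX/SD3-style timestep-shifting schedule used by flow-matching samplers. A minimal sketch, assuming the standard shift formula sigma' = s*sigma / (1 + (s - 1)*sigma); the formula and the example step count are illustrative, not taken from this model's training code:

```python
def shift_sigmas(sigmas, shift=3.0):
    """FLUX/SD3-style timestep shift: sigma' = s*sigma / (1 + (s-1)*sigma)."""
    return [shift * s / (1.0 + (shift - 1.0) * s) for s in sigmas]

# 8 evenly spaced sigmas from 1.0 down to 0.125, as an 8-step sampler might use
base = [i / 8.0 for i in range(8, 0, -1)]
shifted = shift_sigmas(base, shift=3.0)
```

With shift > 1 the intermediate sigmas are pushed toward 1.0, so a short 8-step schedule spends relatively more of its steps at high noise levels.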
- Mixed precision: bf16
- Learning rate: 2e-5
- Batch size: 64
- Image size: 1024x1024