Upload folder using ModelScope SDK

Cherrytest
2025-04-16 18:02:59 +00:00
parent 906afac77f
commit e7462d0ac6
10 changed files with 119 additions and 39 deletions

README.md
@@ -1,47 +1,104 @@
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Text-to-Image
- ControlNet
- Diffusers
- Flux.1-dev
- image-generation
- Stable Diffusion
base_model: black-forest-labs/FLUX.1-dev
---
# FLUX.1-dev-ControlNet-Union-Pro-2.0

This repository contains a unified ControlNet for the FLUX.1-dev model released by [Shakker Labs](https://huggingface.co/Shakker-Labs). We provide an [online demo](https://huggingface.co/spaces/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0).

# Download

You can download the model with the ModelScope SDK or via git clone.

SDK download:
```bash
# Install ModelScope
pip install modelscope
```
```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download
model_dir = snapshot_download('LiblibAI/FLUX.1-dev-ControlNet-Union-Pro-2.0')
```
Git download:
```bash
# Download the model via git
git clone https://www.modelscope.cn/LiblibAI/FLUX.1-dev-ControlNet-Union-Pro-2.0.git
```

# Keynotes
In comparison with [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro), this version:
- Removes the mode embedding, resulting in a smaller model size.
- Improves on canny and pose, with better control and aesthetics.
- Adds support for soft edge; removes support for tile.
# Model Cards
- This ControlNet consists of 6 double blocks and 0 single blocks; the mode embedding is removed.
- We train the model from scratch for 300k steps on a dataset of 20M high-quality general and human images, at 512x512 resolution in BFloat16 with batch size 128 and learning rate 2e-5. The guidance is uniformly sampled from [1, 7], and the text drop ratio is set to 0.20.
- This model supports multiple control modes, including canny, soft edge, depth, pose, and gray. You can use it just as a normal ControlNet.
- This model can be used jointly with other ControlNets.
# Showcases
<table>
<tr>
<td><img src="./images/canny.png" alt="canny" style="height:100%"></td>
</tr>
<tr>
<td><img src="./images/softedge.png" alt="softedge" style="height:100%"></td>
</tr>
<tr>
<td><img src="./images/pose.png" alt="pose" style="height:100%"></td>
</tr>
<tr>
<td><img src="./images/depth.png" alt="depth" style="height:100%"></td>
</tr>
<tr>
<td><img src="./images/gray.png" alt="gray" style="height:100%"></td>
</tr>
</table>
# Inference
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# replace with other conds
control_image = load_image("./conds/canny.png")
width, height = control_image.size

prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing hair gently tousled by the sea breeze. She wears a soft, pastel-colored dress that complements the tranquil blues and greens of the coastal scenery. The golden hues of the setting sun cast a warm glow on her face, highlighting her serene expression. The background features a vast, azure ocean with gentle waves lapping at the shore, surrounded by distant cliffs and a clear, cloudless sky. The composition emphasizes the girl's serene presence amidst the natural beauty, with a balanced blend of warm and cool tones."

image = pipe(
    prompt,
    control_image=control_image,
    width=width,
    height=height,
    controlnet_conditioning_scale=0.7,
    control_guidance_end=0.8,
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```
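Since the model can be used jointly with other ControlNets and the card recommends multiple conditions for stability, here is a minimal, untested multi-condition sketch. It assumes diffusers' `FluxMultiControlNetModel` wrapper and a hypothetical second condition image at `./conds/depth.png`; adjust paths and scales to your setup.

```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel
from diffusers.models import FluxMultiControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

# Load the union checkpoint once and list it twice: one entry per condition.
controlnet_union = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
controlnet = FluxMultiControlNetModel([controlnet_union, controlnet_union])

pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

control_image_canny = load_image("./conds/canny.png")
control_image_depth = load_image("./conds/depth.png")  # hypothetical second condition
width, height = control_image_canny.size

image = pipe(
    "a placeholder prompt describing the scene",
    control_image=[control_image_canny, control_image_depth],
    control_mode=[None, None],  # 2.0 drops the mode embedding, so no mode is needed
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.7, 0.8],  # per-condition scales, see Recommended Parameters
    control_guidance_end=0.8,
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```

Listing the same `FluxControlNetModel` instance twice keeps memory flat while letting each condition carry its own conditioning scale.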
# Recommended Parameters
You can adjust `controlnet_conditioning_scale` and `control_guidance_end` for stronger control and better detail preservation. For better stability, we suggest using multiple conditions. Recommended settings per mode (a preprocessing sketch for the cv2-based conditions follows this list):
- Canny: use cv2.Canny, controlnet_conditioning_scale=0.7, control_guidance_end=0.8.
- Soft Edge: use [AnylineDetector](https://github.com/huggingface/controlnet_aux), controlnet_conditioning_scale=0.7, control_guidance_end=0.8.
- Depth: use [depth-anything](https://github.com/DepthAnything/Depth-Anything-V2), controlnet_conditioning_scale=0.8, control_guidance_end=0.8.
- Pose: use [DWPose](https://github.com/IDEA-Research/DWPose/tree/onnx), controlnet_conditioning_scale=0.9, control_guidance_end=0.65.
- Gray: use cv2.cvtColor, controlnet_conditioning_scale=0.9, control_guidance_end=0.8.
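For the two cv2-based conditions above, this is a minimal preprocessing sketch; the input path and Canny thresholds are illustrative assumptions, and the linked detectors cover soft edge, depth, and pose.

```python
import cv2
from PIL import Image

# Hypothetical source photo to derive condition images from.
source = cv2.imread("./input.png")

# Canny condition: edge-detect on grayscale, then replicate to 3 channels.
gray = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative, tune per image
canny_cond = Image.fromarray(cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB))

# Gray condition: plain grayscale conversion, replicated to 3 channels.
gray_cond = Image.fromarray(cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB))

canny_cond.save("./conds/canny.png")
gray_cond.save("./conds/gray.png")
```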
# Resources
- [InstantX/FLUX.1-dev-IP-Adapter](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter)
- [InstantX/FLUX.1-dev-Controlnet-Canny](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Depth](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro)
# Acknowledgements
This model is developed by [Shakker Labs](https://huggingface.co/Shakker-Labs). The original idea is inspired by [xinsir/controlnet-union-sdxl-1.0](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0). All rights reserved.

BIN conds/canny.png (new binary file, 20 KiB, not shown)

config.json (new file)
@@ -0,0 +1,19 @@
{
"_class_name": "FluxControlNetModel",
"_diffusers_version": "0.31.0.dev0",
"attention_head_dim": 128,
"axes_dims_rope": [
16,
56,
56
],
"guidance_embeds": true,
"in_channels": 64,
"joint_attention_dim": 4096,
"num_attention_heads": 24,
"num_layers": 6,
"num_mode": null,
"num_single_layers": 0,
"patch_size": 1,
"pooled_projection_dim": 768
}
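These fields line up with the model card: `num_layers` gives the 6 double blocks, `num_single_layers` the 0 single blocks, and `num_mode` is null because the mode embedding was removed. A minimal sanity-check sketch, assuming config.json is read from the repository root:

```python
import json

# Read the ControlNet config added in this commit.
with open("config.json") as f:
    cfg = json.load(f)

# Mirror the claims in the model card above.
assert cfg["_class_name"] == "FluxControlNetModel"
assert cfg["num_layers"] == 6         # 6 double blocks
assert cfg["num_single_layers"] == 0  # 0 single blocks
assert cfg["num_mode"] is None        # mode embedding removed
```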

configuration.json (new file)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-to-image", "allow_remote": true}

BIN diffusion_pytorch_model.safetensors (new binary file, stored with Git LFS, not shown)

BIN images/canny.png (new binary file, 428 KiB, not shown)

BIN images/depth.png (new binary file, 712 KiB, not shown)

BIN images/gray.png (new binary file, 883 KiB, not shown)

BIN images/pose.png (new binary file, 526 KiB, not shown)

BIN images/softedge.png (new binary file, 716 KiB, not shown)