mirror of
https://www.modelscope.cn/showlab/OmniConsistency.git
synced 2026-04-02 20:52:53 +08:00
Upload folder using ModelScope SDK
Added binary files (stored with Git LFS, not shown):

- LoRAs/3D_Chibi_rank128_bf16.safetensors
- LoRAs/American_Cartoon_rank128_bf16.safetensors
- LoRAs/Chinese_Ink_rank128_bf16.safetensors
- LoRAs/Clay_Toy_rank128_bf16.safetensors
- LoRAs/Fabric_rank128_bf16.safetensors
- LoRAs/Ghibli_rank128_bf16.safetensors
- LoRAs/Irasutoya_rank128_bf16.safetensors
- LoRAs/Jojo_rank128_bf16.safetensors
- LoRAs/LEGO_rank128_bf16.safetensors
- LoRAs/Line_rank128_bf16.safetensors
- LoRAs/Macaron_rank128_bf16.safetensors
- LoRAs/Oil_Painting_rank128_bf16.safetensors
- LoRAs/Origami_rank128_bf16.safetensors
- LoRAs/Paper_Cutting_rank128_bf16.safetensors
- LoRAs/Picasso_rank128_bf16.safetensors
- LoRAs/Pixel_rank128_bf16.safetensors
- LoRAs/Poly_rank128_bf16.safetensors
- LoRAs/Pop_Art_rank128_bf16.safetensors
- LoRAs/Rick_Morty_rank128_bf16.safetensors
- LoRAs/Snoopy_rank128_bf16.safetensors
- LoRAs/Van_Gogh_rank128_bf16.safetensors
- LoRAs/Vector_rank128_bf16.safetensors
- OmniConsistency.safetensors
README.md (205 lines changed)

@@ -1,47 +1,170 @@
---
license: mit
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: image-to-image
title: OmniConsistency
emoji: 🚀
colorFrom: gray
colorTo: pink
sdk: gradio
sdk_version: 5.31.0
app_file: app.py
pinned: false
short_description: Generate styled image from reference image and external LoRA
---
**OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data**
<br>
[Yiren Song](https://scholar.google.com.hk/citations?user=L2YS0jgAAAAJ),
[Cheng Liu](https://scholar.google.com.hk/citations?hl=zh-CN&user=TvdVuAYAAAAJ),
and
[Mike Zheng Shou](https://sites.google.com/view/showlab)
<br>
[Show Lab](https://sites.google.com/view/showlab), National University of Singapore
<br>

[[Official Code]](https://github.com/showlab/OmniConsistency)
[[Paper]](https://huggingface.co/papers/2505.18445)
[[Dataset]](https://huggingface.co/datasets/showlab/OmniConsistency)

<img src='./figure/teaser.png' width='100%' />

## Installation

We recommend Python 3.10 and PyTorch with CUDA support. To set up the environment:

```bash
# Create a new conda environment
conda create -n omniconsistency python=3.10
conda activate omniconsistency

# Install other dependencies
pip install -r requirements.txt
```

You can fetch the weights from this ModelScope mirror, either with the SDK:

```bash
# Install ModelScope
pip install modelscope
```

```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download
model_dir = snapshot_download('showlab/OmniConsistency')
```

or with Git:

```bash
# Clone the model with Git (requires git-lfs)
git clone https://www.modelscope.cn/showlab/OmniConsistency.git
```
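After installing, a quick sanity check can confirm the Python version and whether PyTorch sees a CUDA device. This is a minimal sketch; the `check_env` helper is illustrative and not part of the repo:

```python
import sys
from importlib.util import find_spec

def check_env():
    """Report whether the recommended pieces of the environment are present."""
    status = {
        "python_3_10_plus": sys.version_info[:2] >= (3, 10),
        "torch_installed": find_spec("torch") is not None,
        "cuda_available": False,
    }
    if status["torch_installed"]:
        import torch  # imported lazily so the check still runs without PyTorch
        status["cuda_available"] = torch.cuda.is_available()
    return status

print(check_env())
```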

## Download

You can download the OmniConsistency model and pretrained LoRAs directly from [Hugging Face](https://huggingface.co/showlab/OmniConsistency), or with the Python snippets below.

### Pretrained LoRAs

```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/3D_Chibi_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/American_Cartoon_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Chinese_Ink_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Clay_Toy_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Fabric_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Ghibli_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Irasutoya_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Jojo_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/LEGO_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Line_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Macaron_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Oil_Painting_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Origami_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Paper_Cutting_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Picasso_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Pixel_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Poly_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Pop_Art_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Rick_Morty_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Snoopy_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Van_Gogh_rank128_bf16.safetensors", local_dir="./LoRAs")
hf_hub_download(repo_id="showlab/OmniConsistency", filename="LoRAs/Vector_rank128_bf16.safetensors", local_dir="./LoRAs")
```

### OmniConsistency Model

```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="showlab/OmniConsistency", filename="OmniConsistency.safetensors", local_dir="./Model")
```
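The 22 per-style download calls above can equivalently be written as a loop over the style names. A minimal sketch; `STYLES`, `lora_filename`, and `download_all_loras` are illustrative helper names, not part of the repo:

```python
STYLES = [
    "3D_Chibi", "American_Cartoon", "Chinese_Ink", "Clay_Toy", "Fabric",
    "Ghibli", "Irasutoya", "Jojo", "LEGO", "Line", "Macaron",
    "Oil_Painting", "Origami", "Paper_Cutting", "Picasso", "Pixel",
    "Poly", "Pop_Art", "Rick_Morty", "Snoopy", "Van_Gogh", "Vector",
]

def lora_filename(style):
    """Path of a style LoRA inside the showlab/OmniConsistency repo."""
    return f"LoRAs/{style}_rank128_bf16.safetensors"

def download_all_loras(local_dir="./LoRAs"):
    """Fetch every style LoRA (requires network access)."""
    from huggingface_hub import hf_hub_download
    for style in STYLES:
        hf_hub_download(repo_id="showlab/OmniConsistency",
                        filename=lora_filename(style),
                        local_dir=local_dir)
```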

## Usage

Here's a basic example of using OmniConsistency:

### Model Initialization

```python
import time
import torch
from PIL import Image
from src_inference.pipeline import FluxPipeline
from src_inference.lora_helper import set_single_lora

def clear_cache(transformer):
    # Reset the cached key/value banks in every attention processor
    for name, attn_processor in transformer.attn_processors.items():
        attn_processor.bank_kv.clear()

# Initialize model
device = "cuda"
base_path = "/path/to/black-forest-labs/FLUX.1-dev"
pipe = FluxPipeline.from_pretrained(base_path, torch_dtype=torch.bfloat16).to(device)

# Load OmniConsistency model
set_single_lora(pipe.transformer,
                "/path/to/OmniConsistency.safetensors",
                lora_weights=[1], cond_size=512)

# Load external LoRA
pipe.unload_lora_weights()
pipe.load_lora_weights("/path/to/lora_folder",
                       weight_name="lora_name.safetensors")
```

### Style Inference

```python
image_path1 = "figure/test.png"
prompt = "3D Chibi style, Three individuals standing together in the office."

subject_images = []
spatial_image = [Image.open(image_path1).convert("RGB")]

width, height = 1024, 1024

start_time = time.time()

image = pipe(
    prompt,
    height=height,
    width=width,
    guidance_scale=3.5,
    num_inference_steps=25,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(5),
    spatial_images=spatial_image,
    subject_images=subject_images,
    cond_size=512,
).images[0]

end_time = time.time()
elapsed_time = end_time - start_time
print(f"code running time: {elapsed_time:.2f} s")

# Clear cache after generation
clear_cache(pipe.transformer)

# Make sure the output directory exists before saving
import os
os.makedirs("results", exist_ok=True)
image.save("results/output.png")
```
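To render one photo in several styles, the initialization and inference steps above can be combined into a loop that swaps the external style LoRA between runs. A minimal sketch, assuming the `pipe` and `clear_cache` from Model Initialization; `style_prompt` and `stylize_in_styles` are hypothetical helper names, not part of the repo:

```python
def style_prompt(style, subject):
    """Build a prompt like '3D Chibi style, <subject>' from a LoRA style name."""
    return f"{style.replace('_', ' ')} style, {subject}"

def stylize_in_styles(pipe, clear_cache, image_path, subject, styles,
                      lora_dir="./LoRAs", out_dir="results"):
    """Render the image at image_path once per style, swapping LoRAs in between."""
    import os
    import torch
    from PIL import Image

    os.makedirs(out_dir, exist_ok=True)
    spatial_image = [Image.open(image_path).convert("RGB")]
    for style in styles:
        # Swap the external style LoRA (OmniConsistency itself stays loaded)
        pipe.unload_lora_weights()
        pipe.load_lora_weights(lora_dir,
                               weight_name=f"{style}_rank128_bf16.safetensors")
        image = pipe(
            style_prompt(style, subject),
            height=1024, width=1024,
            guidance_scale=3.5, num_inference_steps=25,
            max_sequence_length=512,
            generator=torch.Generator("cpu").manual_seed(5),
            spatial_images=spatial_image,
            subject_images=[],
            cond_size=512,
        ).images[0]
        clear_cache(pipe.transformer)  # drop cached KV banks between runs
        image.save(os.path.join(out_dir, f"{style}.png"))
```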

## Datasets

Our datasets have been uploaded to [Hugging Face](https://huggingface.co/datasets/showlab/OmniConsistency) and are available for direct use via the `datasets` library.

You can easily load any of the 22 style subsets like this:

```python
from datasets import load_dataset

# Load a single style (e.g., Ghibli)
ds = load_dataset("showlab/OmniConsistency", split="Ghibli")
print(ds[0])
```

## Citation

```bibtex
@inproceedings{Song2025OmniConsistencyLS,
  title={OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data},
  author={Yiren Song and Cheng Liu and Mike Zheng Shou},
  year={2025},
  url={https://api.semanticscholar.org/CorpusID:278905729}
}
```
configuration.json (new file, 1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "image-to-image", "allow_remote": true}
Added binary files (not shown):

- figure/teaser.png (3.3 MiB)
- figure/test.png (100 KiB)