Upload ./README.md to ModelScope hub

Author: Lmxyy1999
Date: 2025-08-15 08:24:49 +00:00
Parent: 4316034047
Commit: 8309d737bf

---
base_model: Qwen/Qwen-Image
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
frameworks: PyTorch
language:
- en
license: Apache License 2.0
tags:
- text-to-image
- SVDQuant
- Qwen-Image
- Diffusion
- Quantization
- ICLR2025
tasks:
- text-to-image-synthesis
---
<p align="center" style="border-radius: 10px">
    <img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/nunchaku.svg" width="30%" alt="Nunchaku Logo"/>
</p>

#### Download the model with the ModelScope SDK
```bash
# Install the ModelScope SDK
pip install modelscope
```
```python
# Download the model via the ModelScope SDK
from modelscope import snapshot_download
model_dir = snapshot_download('nunchaku-tech/nunchaku-qwen-image')
```
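The repository ships four multi-gigabyte checkpoints, so you may want to fetch only the one that matches your GPU. A minimal sketch, assuming your installed `modelscope` release supports the `allow_file_pattern` filter of `snapshot_download` (newer versions do; otherwise fall back to the full download above):
```python
from modelscope import snapshot_download

# Fetch a single checkpoint instead of the whole repository.
# `allow_file_pattern` is assumed to be available in your modelscope release.
model_dir = snapshot_download(
    'nunchaku-tech/nunchaku-qwen-image',
    allow_file_pattern='svdq-int4_r32-qwen-image.safetensors',
)
print(model_dir)
```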
#### Download the model with `git clone`
```bash
# Clone the model repository with git
git clone https://www.modelscope.cn/nunchaku-tech/nunchaku-qwen-image.git
```
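The checkpoint files are large binaries that ModelScope typically serves through Git LFS. If your clone contains only small pointer files instead of the actual weights, install LFS and re-fetch:
```bash
# One-time Git LFS setup, then pull the actual checkpoint payloads
git lfs install
git lfs pull
```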
# Model Card for nunchaku-qwen-image
<p style="color: lightgrey;">如果您是本模型的贡献者,我们邀请您根据<a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">模型贡献文档</a>,及时完善模型卡片内容。</p>
This repository contains Nunchaku-quantized versions of [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image), a model designed to generate high-quality images from text prompts, with significant advances in complex text rendering. The quantized models are optimized for efficient inference while keeping the loss in quality minimal.
## Model Details
### Model Description
- **Developed by:** Nunchaku Team
- **Model type:** text-to-image
- **License:** apache-2.0
- **Quantized from model:** [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image)
### Model Files
- [`svdq-int4_r32-qwen-image.safetensors`](./svdq-int4_r32-qwen-image.safetensors): SVDQuant quantized INT4 Qwen-Image model with rank 32. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image.safetensors`](./svdq-int4_r128-qwen-image.safetensors): SVDQuant quantized INT4 Qwen-Image model with rank 128. For users with non-Blackwell GPUs (pre-50-series). It offers better quality than the rank 32 model, but it is slower.
- [`svdq-fp4_r32-qwen-image.safetensors`](./svdq-fp4_r32-qwen-image.safetensors): SVDQuant quantized NVFP4 Qwen-Image model with rank 32. For users with Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image.safetensors`](./svdq-fp4_r128-qwen-image.safetensors): SVDQuant quantized NVFP4 Qwen-Image model with rank 128. For users with Blackwell GPUs (50-series). It offers better quality than the rank 32 model, but it is slower.
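To pick a file programmatically, you can branch on the CUDA compute capability: Blackwell GPUs report major version 10 or 12. A minimal selection sketch (an illustrative helper, not part of this repository; Nunchaku itself ships a similar `get_precision` utility):
```python
import torch

def pick_checkpoint(rank: int = 32) -> str:
    """Choose the NVFP4 build on Blackwell GPUs, the INT4 build elsewhere."""
    major, _ = torch.cuda.get_device_capability()
    precision = "fp4" if major >= 10 else "int4"  # Blackwell reports SM 10.x/12.x
    return f"svdq-{precision}_r{rank}-qwen-image.safetensors"

print(pick_checkpoint(rank=32))  # e.g. svdq-int4_r32-qwen-image.safetensors
```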
### Model Sources
- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)
## Usage
- Diffusers usage: see [qwen-image.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image.py); a minimal sketch follows after this list.
- ComfyUI Usage: Coming soon!
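For orientation, here is a minimal sketch of the Diffusers path, modeled on the linked example. The loader class name (`NunchakuQwenImageTransformer2DModel`) and the `get_precision` helper follow the pattern of Nunchaku's other model loaders and are assumptions here; treat the linked `qwen-image.py` as authoritative:
```python
import torch
from diffusers import DiffusionPipeline
from nunchaku import NunchakuQwenImageTransformer2DModel  # assumed loader class
from nunchaku.utils import get_precision  # assumed helper: "int4" or "fp4" per GPU

# Load the 4-bit rank-32 transformer and drop it into the standard pipeline.
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    f"nunchaku-tech/nunchaku-qwen-image/svdq-{get_precision()}_r32-qwen-image.safetensors"
)
pipeline = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipeline(
    'A coffee-shop sign that reads "Nunchaku"',
    num_inference_steps=50,
).images[0]
image.save("qwen-image.png")
```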
## Performance
![performance](https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/efficiency.jpg)
## Citation
```bibtex
@inproceedings{
li2024svdquant,
title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025}
}
```