diff --git a/README.md b/README.md
index ad87937..80dcf7c 100644
--- a/README.md
+++ b/README.md
@@ -1,48 +1,77 @@
 ---
+base_model: Qwen/Qwen-Image-Edit-2509
+base_model_relation: quantized
+datasets:
+- mit-han-lab/svdquant-datasets
+frameworks: PyTorch
+language:
+- en
 license: Apache License 2.0
-tags: []
+tags:
+- image-editing
+- SVDQuant
+- Qwen-Image-Edit-2509
+- Diffusion
+- Quantization
+- ICLR2025
+tasks:
+- text-to-image-synthesis
 
-#model-type:
-## e.g. gpt, phi, llama, chatglm, baichuan
-#- gpt
-
-#domain:
-## e.g. nlp, cv, audio, multi-modal
-#- nlp
-
-#language:
-## list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
-#- cn
-
-#metrics:
-## e.g. CIDEr, BLEU, ROUGE
-#- CIDEr
-
-#tags:
-## free-form, e.g. training approaches such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
-#- pretrained
-
-#tools:
-## e.g. vllm, fastchat, llamacpp, AdaSeq
-#- vllm
 ---
-### The contributor of this model has not provided a more detailed model description. The model files and weights can be found on the "Model Files" page.
-#### You can download the model with the git clone command below, or via the ModelScope SDK
+
+
+
-If you are a contributor to this model, we invite you to complete the model card according to the model contribution documentation.
\ No newline at end of file
+
+This repository contains Nunchaku-quantized versions of [Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509), an image-editing model built on [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image) that offers significant advances in complex text rendering. The quantized variants are optimized for efficient inference while keeping the loss in output quality minimal.
+
+## News
+
+No recent news. Stay tuned for updates!
+
+## Model Details
+
+### Model Description
+
+- **Developed by:** Nunchaku Team
+- **Model type:** image-to-image
+- **License:** apache-2.0
+- **Quantized from model:** [Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509)
+
+### Model Files
+
+- [`svdq-int4_r32-qwen-image-edit.safetensors`](./svdq-int4_r32-qwen-image-edit.safetensors): SVDQuant INT4 (rank 32) Qwen-Image-Edit-2509 model. For users with non-Blackwell GPUs (pre-50-series).
+- [`svdq-int4_r128-qwen-image-edit.safetensors`](./svdq-int4_r128-qwen-image-edit.safetensors): SVDQuant INT4 (rank 128) Qwen-Image-Edit-2509 model. For users with non-Blackwell GPUs (pre-50-series). It offers better quality than the rank-32 model, but is slower.
+- [`svdq-fp4_r32-qwen-image-edit.safetensors`](./svdq-fp4_r32-qwen-image-edit.safetensors): SVDQuant NVFP4 (rank 32) Qwen-Image-Edit-2509 model. For users with Blackwell GPUs (50-series).
+- [`svdq-fp4_r128-qwen-image-edit.safetensors`](./svdq-fp4_r128-qwen-image-edit.safetensors): SVDQuant NVFP4 (rank 128) Qwen-Image-Edit-2509 model. For users with Blackwell GPUs (50-series). It offers better quality than the rank-32 model, but is slower.
+
+### Model Sources
+
+- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
+- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
+- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
+- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)
+
+## Usage
+
+- Diffusers Usage: See [qwen-image-edit-2509.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509.py). Check this [tutorial](https://nunchaku.tech/docs/nunchaku/usage/qwen-image-edit.html) for more advanced usage. A rough loading sketch is also included at the end of this card.
+- ComfyUI Usage: Will be released soon!
+
+## Performance
+
+## Citation
+
+```bibtex
+@inproceedings{
+  li2024svdquant,
+  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
+  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
+  booktitle={The Thirteenth International Conference on Learning Representations},
+  year={2025}
+}
+```
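+
+## Diffusers Loading Sketch
+
+The snippet below is a minimal, unverified sketch of how the files listed under "Model Files" might be loaded with Diffusers, assuming this repository is published as `nunchaku-tech/nunchaku-qwen-image-edit-2509`. The class names (`NunchakuQwenImageTransformer2DModel`, `QwenImageEditPlusPipeline`) and the `get_precision()` helper are assumptions based on the example script linked above; defer to [qwen-image-edit-2509.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-2509.py) for the authoritative usage.
+
+```python
+# Hedged sketch: the repository id, class names, and helper functions below are
+# assumptions based on the linked example script; verify against the nunchaku docs.
+import torch
+from diffusers import QwenImageEditPlusPipeline
+from diffusers.utils import load_image
+from nunchaku import NunchakuQwenImageTransformer2DModel
+from nunchaku.utils import get_precision
+
+# get_precision() is assumed to return "int4" on pre-Blackwell GPUs and "fp4" on
+# Blackwell (50-series) GPUs, matching the model files listed above.
+precision = get_precision()
+rank = 32  # rank 128 trades speed for quality
+
+# Load the quantized transformer, then hand it to the standard Diffusers pipeline.
+transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
+    f"nunchaku-tech/nunchaku-qwen-image-edit-2509/svdq-{precision}_r{rank}-qwen-image-edit.safetensors"
+)
+pipeline = QwenImageEditPlusPipeline.from_pretrained(
+    "Qwen/Qwen-Image-Edit-2509",
+    transformer=transformer,
+    torch_dtype=torch.bfloat16,
+).to("cuda")
+
+# Edit a local input image with a text instruction.
+image = load_image("input.png")  # replace with your own image
+result = pipeline(image=image, prompt="Add a red scarf to the cat", num_inference_steps=40).images[0]
+result.save("edited.png")
+```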