Upload ./README.md to ModelScope hub

This commit is contained in:
Lmxyy1999
2025-11-16 02:24:23 +00:00
parent 5b22c46da5
commit e7dab66452


@@ -50,13 +50,12 @@ This repository contains Nunchaku-quantized versions of [FLUX.1-Kontext-dev](htt
 - [`svdq-int4_r32-flux.1-kontext-dev.safetensors`](./svdq-int4_r32-flux.1-kontext-dev.safetensors): SVDQuant quantized INT4 FLUX.1-Kontext-dev model. For users with non-Blackwell GPUs (pre-50-series).
 - [`svdq-fp4_r32-flux.1-kontext-dev.safetensors`](./svdq-fp4_r32-flux.1-kontext-dev.safetensors): SVDQuant quantized NVFP4 FLUX.1-Kontext-dev model. For users with Blackwell GPUs (50-series).
 ### Model Sources
 - **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
 - **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
 - **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
-- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)
+- **Demo:** [demo.nunchaku.tech](https://demo.nunchaku.tech)
 ## Usage