mirror of
https://www.modelscope.cn/FunAudioLLM/Fun-CosyVoice3-0.5B-2512.git
synced 2026-04-02 23:12:53 +08:00
sudo yum install sox sox-devel
```

### Model download

We strongly recommend downloading our pretrained `Fun-CosyVoice3-0.5B`, `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models, together with the `CosyVoice-ttsfrd` resource.

``` python
# Download the models via the ModelScope SDK
from modelscope import snapshot_download

snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```
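
After downloading, you may want to confirm that every model directory landed where the later commands expect it. The helper below is our own illustrative sketch, not part of the ModelScope SDK:

```python
from pathlib import Path

# Directories that the snapshot_download calls above are expected to create.
EXPECTED_DIRS = [
    "pretrained_models/Fun-CosyVoice3-0.5B",
    "pretrained_models/CosyVoice2-0.5B",
    "pretrained_models/CosyVoice-300M",
    "pretrained_models/CosyVoice-300M-SFT",
    "pretrained_models/CosyVoice-300M-Instruct",
    "pretrained_models/CosyVoice-ttsfrd",
]

def missing_models(root="."):
    """Return the expected model directories that do not exist under root."""
    return [d for d in EXPECTED_DIRS if not (Path(root) / d).is_dir()]
```

Running `missing_models()` from the repository root before inference gives a quick list of anything still to download.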

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.

Note that this step is optional; if you do not install the `ttsfrd` package, we use `wetext` by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```
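
The fallback behavior described above amounts to an import guard; the function below is an illustrative pattern, not the repository's actual loading code:

```python
def pick_text_normalizer():
    """Prefer ttsfrd when it is installed; otherwise fall back to wetext."""
    try:
        import ttsfrd  # noqa: F401  # only present if you installed the wheels above
        return "ttsfrd"
    except ImportError:
        return "wetext"
```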
### Basic Usage

We strongly recommend using `Fun-CosyVoice3-0.5B` for better performance. Follow the code in `example.py` for detailed usage of each model.

```sh
python example.py
```

#### CosyVoice2 vLLM Usage

If you want to use vLLM for inference, please install `vllm==v0.9.0`; older vLLM versions do not support CosyVoice2 inference.

Note that `vllm==v0.9.0` pins several specific dependencies, for example `torch==2.7.0`. We recommend creating a new environment, so that your existing environment is not corrupted if your hardware does not support vLLM.

``` sh
conda create -n cosyvoice_vllm --clone cosyvoice
conda activate cosyvoice_vllm
pip install vllm==v0.9.0 transformers==4.51.3 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
python vllm_example.py
```
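
Since only `vllm>=0.9.0` works here, a small pre-flight check can fail fast before any model is loaded. This helper is our own sketch; the `v`-prefix handling mirrors the `v0.9.0` spelling used above:

```python
def version_tuple(version):
    """Parse 'v0.9.0' or '0.9.0' into a comparable tuple like (0, 9, 0)."""
    return tuple(int(part) for part in version.lstrip("v").split(".")[:3])

def supports_cosyvoice2(vllm_version):
    """Per the note above, vLLM releases older than 0.9.0 cannot run CosyVoice2."""
    return version_tuple(vllm_version) >= (0, 9, 0)
```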
#### Start web demo

You can use our web demo page to get familiar with CosyVoice quickly.

Please see the demo website for details.

``` sh
# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```
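
The comment in the command above implies a mode-to-checkpoint mapping; a hypothetical helper (the names are ours, not part of `webui.py`) that encodes it:

```python
# Hypothetical mapping from inference mode to the matching pretrained model,
# following the comment in the webui command above.
MODEL_DIR_BY_MODE = {
    "zero_shot": "pretrained_models/CosyVoice-300M",
    "sft": "pretrained_models/CosyVoice-300M-SFT",
    "instruct": "pretrained_models/CosyVoice-300M-Instruct",
}

def model_dir_for(mode):
    """Return a --model_dir value for a given inference mode."""
    try:
        return MODEL_DIR_BY_MODE[mode]
    except KeyError:
        raise ValueError(f"unsupported mode: {mode!r}")
```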
#### Advanced Usage

For advanced users, we have provided training and inference scripts in `examples/libritts/cosyvoice/run.sh`.

#### Build for deployment

Optionally, if you want to deploy CosyVoice as a service, you can run the following steps.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```
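
Because `docker run -d` returns immediately while the server is still loading models, a client may want to poll port 50000 before sending requests. A generic sketch of such a wait loop (our own addition, not part of the provided client scripts):

```python
import socket
import time

def wait_for_server(host="127.0.0.1", port=50000, timeout_s=30.0, interval_s=0.5):
    """Poll until the containerized server accepts TCP connections, or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                return True
        except OSError:
            time.sleep(interval_s)
    return False
```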
#### Using NVIDIA TensorRT-LLM for deployment

Using TensorRT-LLM to accelerate the CosyVoice2 LLM gives about a 4x speedup compared with the Hugging Face Transformers implementation. To get started quickly:

``` sh
cd runtime/triton_trtllm
docker compose up -d
```

For more details, see the [triton_trtllm runtime directory](https://github.com/FunAudioLLM/CosyVoice/tree/main/runtime/triton_trtllm).

## Discussion & Communication

You can directly discuss on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).

You can also scan the QR code below to join our official DingTalk chat group.

<img src="./asset/dingding.png" width="250px">

## Acknowledgements

1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).

## Citations
``` bibtex
@article{du2024cosyvoice,
  title={CosyVoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens},
  author={Du, Zhihao and Chen, Qian and Zhang, Shiliang and Hu, Kai and Lu, Heng and Yang, Yexin and Hu, Hangrui and Zheng, Siqi and Gu, Yue and Ma, Ziyang and others},
  journal={arXiv preprint arXiv:2407.05407},
  year={2024}
}

@article{du2024cosyvoice2,
  title={CosyVoice 2: Scalable streaming speech synthesis with large language models},
  author={Du, Zhihao and Wang, Yuxuan and Chen, Qian and Shi, Xian and Lv, Xiang and Zhao, Tianyu and Gao, Zhifu and Yang, Yexin and Gao, Changfeng and Wang, Hui and others},
  journal={arXiv preprint arXiv:2412.10117},
  year={2024}
}

@article{du2025cosyvoice,
  title={CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training},
  author={Du, Zhihao and Gao, Changfeng and Wang, Yuxuan and Yu, Fan and Zhao, Tianyu and Wang, Hao and Lv, Xiang and Wang, Hui and Shi, Xian and An, Keyu and others},
  journal={arXiv preprint arXiv:2505.17589},
  year={2025}
}

@inproceedings{lyu2025build,
  title={Build LLM-Based Zero-Shot Streaming TTS System with CosyVoice},
  author={Lyu, Xiang and Wang, Yuxuan and Zhao, Tianyu and Wang, Hao and Liu, Huadai and Du, Zhihao},
  booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--2},
  year={2025},
  organization={IEEE}
}
```
## Disclaimer

The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.