diff --git a/README.md b/README.md
index c89119e..cb2cab3 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ HunyuanVideo-1.5 is a video generation model that delivers top-tier quality with
-
+
@@ -57,7 +57,7 @@ HunyuanVideo-1.5 is a video generation model that delivers top-tier quality with
## 🔥🔥🔥 News
-* 🚀 Nov 24, 2025: We now support cache inference, achieving approximately 2x speedup! Pull the latest code to try it.
+* 🚀 Nov 24, 2025: We now support cache inference, achieving approximately 2x speedup! Pull the latest code to try it. 🔥🔥🔥🆕
* 👋 Nov 20, 2025: We release the inference code and model weights of HunyuanVideo-1.5.
@@ -76,6 +76,9 @@ If you develop/use HunyuanVideo-1.5 in your projects, welcome to let us know.
- **LightX2V** - [LightX2V](https://github.com/ModelTC/LightX2V): A lightweight and efficient video generation framework that integrates HunyuanVideo-1.5, supporting multiple engineering acceleration techniques for fast inference.
+- **Wan2GP v9.62** - [Wan2GP](https://github.com/deepbeepmeep/Wan2GP): Wan2GP is a very low-VRAM app (as low as 6 GB of VRAM for HunyuanVideo-1.5) that supports a LoRA accelerator for 8-step generation and offers tools to facilitate video generation.
+
+
## 📑 Open-source Plan
- HunyuanVideo-1.5 (T2V/I2V)
- [x] Inference Code and checkpoints
@@ -400,12 +403,14 @@ We report the total inference time for 50 diffusion steps for HunyuanVideo 1.5 b
## 📚 Citation
```bibtex
-@misc{hunyuanvideo2025,
- title={HunyuanVideo 1.5 Technical Report},
+@misc{hunyuanvideo2025,
+ title={HunyuanVideo 1.5 Technical Report},
author={Tencent Hunyuan Foundation Model Team},
year={2025},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5}},
+ eprint={2511.18870},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2511.18870},
}
```
diff --git a/README_CN.md b/README_CN.md
index 0527f46..88132c6 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -24,7 +24,7 @@ As a lightweight video generation model, HunyuanVideo-1.5 requires only 8.3B parameters to
-
+
@@ -40,7 +40,7 @@ As a lightweight video generation model, HunyuanVideo-1.5 requires only 8.3B parameters to
 ## 🔥🔥🔥 News
-* 🚀 Nov 24, 2025: We now support cache inference, achieving approximately 2x speedup! Pull the latest code to try it.
+* 🚀 Nov 24, 2025: We now support cache inference, achieving approximately 2x speedup! Pull the latest code to try it. 🔥🔥🔥🆕
 * 👋 Nov 20, 2025: We open-sourced the code and inference weights of HunyuanVideo-1.5
 ## 🎥 Demo Videos
@@ -58,6 +58,9 @@ As a lightweight video generation model, HunyuanVideo-1.5 requires only 8.3B parameters to
 - **LightX2V** - [LightX2V](https://github.com/ModelTC/LightX2V): A lightweight and efficient video generation framework that integrates HunyuanVideo-1.5 and supports multiple engineering acceleration techniques for fast inference.
+- **Wan2GP v9.62** - [Wan2GP](https://github.com/deepbeepmeep/Wan2GP): Wan2GP is a very low-VRAM app (as low as 6 GB of VRAM for HunyuanVideo-1.5) that supports a LoRA accelerator for 8-step generation and offers tools to facilitate video generation.
+
+
 ## 📑 Open-source Plan
 - HunyuanVideo-1.5 (T2V/I2V)
 - [x] Inference Code and checkpoints
@@ -381,12 +384,14 @@ The GSB (Good/Same/Bad) evaluation method is widely used, based on overall perceived video quality, to
 ## 📚 Citation
```bibtex
-@misc{hunyuanvideo2025,
- title={HunyuanVideo 1.5 Technical Report},
+@misc{hunyuanvideo2025,
+ title={HunyuanVideo 1.5 Technical Report},
author={Tencent Hunyuan Foundation Model Team},
year={2025},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5}},
+ eprint={2511.18870},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2511.18870},
}
```