From cd8ed34e759713ec72510b61d3a99258ec7a8492 Mon Sep 17 00:00:00 2001
From: Cherrytest
Date: Fri, 15 Dec 2023 03:16:18 +0000
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4309935..7814c79 100644
--- a/README.md
+++ b/README.md
@@ -45,7 +45,7 @@ widgets:
 In addition, many of the design concepts and details of **Image-to-Video** (such as the core UNet) are inherited from our publicly released work **VideoComposer**; for details, please refer to our [VideoComposer](https://videocomposer.github.io) page and this project's [ModelScope](https://github.com/modelscope/modelscope) repository.
-The **Image-to-Video** project aims to address the task of HD video generation based on input images. **Image-to-Video** is one of the HQ video generation base models developed by DAMO Academy. Its core components consist of two stages, each addressing the issues of semantic consistency and video quality. The total number of parameters is approximately 3.7 billion. The model has been pre-trained on a large-scale mixture of video and image data and fine-tuned on a small amount of high-quality data. This data distribution is extensive and diverse, and the model demonstrates good generalization to different types of data. Compared to existing video generation models, the **I2VGen-XL** project has significant advantages in terms of quality, texture, semantics, and temporal continuity.
+The **Image-to-Video** project aims to address the task of HD video generation based on input images. **Image-to-Video** is one of the high-quality video generation base models developed by DAMO Academy. Its core components consist of two stages, which address semantic consistency and video quality, respectively. The total number of parameters is approximately 3.7 billion. The model has been pre-trained on a large-scale mixture of video and image data and fine-tuned on a small amount of high-quality data. Because this data distribution is extensive and diverse, the model generalizes well to different types of data. Compared to existing video generation models, the **Image-to-Video** project has significant advantages in terms of quality, texture, semantics, and temporal continuity. Additionally, many design concepts and details of **Image-to-Video** (such as the core UNet) are inherited from our publicly available work, **VideoComposer**. For detailed information, please refer to our [VideoComposer](https://videocomposer.github.io) page and the GitHub code repository for this [ModelScope](https://github.com/modelscope/modelscope) project.