diff --git a/README.md b/README.md
index 04420e2..deffcb5 100644
--- a/README.md
+++ b/README.md
@@ -1,28 +1,79 @@
+---
+license: other
+license_name: bilibili
+license_link: LICENSE
+---
+
+
---
---
+---
+
+## Usage: Long Text Translation and Summary (Index-1.9B-32K)
+- Clone the code repository for model execution and evaluation:
+```shell
+git clone https://github.com/bilibili/Index-1.9B
+cd Index-1.9B
+```
+- Download the model files to your local machine.
+
+- Use pip to install the required environment:
+
+```shell
+pip install -r requirements.txt
+```
+
+- Run the interactive tool for long text: **demo/cli_long_text_demo.py**
+- By default, the model reads this file: data/user_long_text.txt and summarizes its content in Chinese.
+- You can open a new window and modify the file content in real time; the model will read the updated file and summarize it.
+
+```shell
+cd demo/
+CUDA_VISIBLE_DEVICES=0 python cli_long_text_demo.py --model_path '/path/to/model/' --input_file_path data/user_long_text.txt
+```
+- Run & interaction example (translation and summary of the English-language Bilibili financial report released on 2024.8.22 --- [original English report here](https://github.com/bilibili/Index-1.9B/tree/main/demo/data/user_long_text.txt)):
+
+

+
Translation and Summary (Bilibili financial report released on 2024.8.22)
+
+
+
## Limitations and Disclaimer
Index-1.9B-32K may generate inaccurate, biased, or otherwise objectionable content in some cases. The model cannot understand or express personal opinions or value judgments when generating content, and its output does not represent the views or stance of the model developers. Therefore, please use the generated content with caution. Users are responsible for evaluating and verifying the generated content and should refrain from spreading harmful content. Before deploying any related applications, developers should conduct safety tests and fine-tune the model based on specific use cases.
diff --git a/README_zh.md b/README_zh.md
index ea653b1..294542d 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -1,30 +1,77 @@
+
-
- Index-1.9B-32K
-
-
-[Switch to English](https://modelscope.cn/models/IndexTeam/Index-1.9B-32K/file/view/master?fileName=README.md&status=1)
+
+ Index-1.9B-32K
+
+
+[Switch to English](https://huggingface.co/IndexTeam/Index-1.9B-32K/blob/main/README.md)
+
+
---
## Model Introduction
-Index-1.9B-32K is a language model with only 1.9B parameters yet a 32K context length (meaning this tiny powerhouse can read a 35,000-word document in one pass). The model has undergone continued pre-training (Continue Pre-Train) and supervised fine-tuning (SFT) specifically on long texts of more than 32K tokens, trained mainly on our carefully cleaned long-text pre-training corpus and our self-built long-text instruction set. It is now open-sourced on both Hugging Face and ModelScope.
+Index-1.9B-32K is a language model with only 1.9B parameters yet a 32K context length (meaning this tiny powerhouse can read a document of more than 35,000 words in one pass). The model has undergone continued pre-training (Continue Pre-Train) and supervised fine-tuning (SFT) specifically on long texts of more than 32K tokens, trained mainly on our carefully cleaned long-text pre-training corpus and our self-built long-text instruction set. It is now open-sourced on both Hugging Face and ModelScope.
-Index-1.9B-32K **achieves outstanding long-text processing capability at an extremely small model size (about 2% of models such as GPT-4)**. Below are comparison results against GPT-4 and GPT-3.5-turbo-16k:
+Index-1.9B-32K achieves outstanding long-text processing capability at an extremely small model size (about 2% of models such as GPT-4). As the figure below shows, our 1.9B model even far outscores 7B-sized models. Below is a comparison with GPT-4, Qwen2, and other models:
+
+
+
+Comparison of the long-text capability of Index-1.9B-32K with GPT-4, Qwen2, and other models
+Index-1.9B-32K achieves excellent results in the 32K-length needle-in-a-haystack test. As shown below, there is only one yellow spot (score 91.08) in the (32K length, 10% depth) region; performance everywhere else is excellent, almost entirely green.
+
+

+
Needle-in-a-haystack evaluation
+
+
## Index-1.9B-32K Model Download, Usage, and Technical Report:
For Index-1.9B-32K model download, usage instructions, and the technical report, see:
-