Update README.md

This commit is contained in:
ai-modelscope
2025-06-06 00:21:49 +08:00
parent d3e5e7dfca
commit 827c1c384a


@@ -11,11 +11,11 @@ library_name: transformers
<h3 align="center">
<b>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
Unlocking the Reasoning Potential of Language Model<br/>From Pretraining to Posttraining
<br/>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
</b>
</h3>
@@ -35,7 +35,41 @@ library_name: transformers
<br/>
> This model repository is licensed under the MIT License.
---
## Updates
[2025.05.30] We scaled the SFT dataset from approximately 500K to 6M instances and progressively extended the RL training window size from 32K to 48K. As a result, the performance of [MiMo-7B-RL-0530](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL-0530) on AIME24 improves steadily and eventually surpasses that of DeepSeek R1 (79.8).
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>MiMo-7B-RL</th>
<th>MiMo-7B-RL-0530</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="3"><strong>Mathematics</strong></td>
<td rowspan="11"><img width="80%" src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/length.jpg?raw=true"></td>
</tr>
<tr><td>MATH500<br/>(Pass@1)</td><td>95.8</td><td>97.2</td></tr>
<tr><td>AIME 2024<br/>(Pass@1)</td><td>68.2</td><td>80.1</td></tr>
<tr><td>AIME 2025<br/>(Pass@1)</td><td>55.4</td><td>70.2</td></tr>
<tr><td colspan="3"><strong>Code</strong></td></tr>
<tr><td>LiveCodeBench v5<br/>(Pass@1)</td><td>57.8</td><td>60.9</td></tr>
<tr><td>LiveCodeBench v6<br/>(Pass@1)</td><td>49.3</td><td>52.2</td></tr>
<tr><td colspan="3"><strong>STEM</strong></td></tr>
<tr><td>GPQA-Diamond<br/>(Pass@1)</td><td>54.4</td><td>60.6</td></tr>
<tr><td colspan="3"><strong>General</strong></td></tr>
<tr><td>AlignBench v1.1<br/>(Evaluated by GPT-4.1)</td><td>6.9</td><td>7.4</td></tr>
</tbody>
</table>
---
## I. Introduction
@@ -122,7 +156,7 @@ MiMo-7B series
### SGLang Inference
Thanks to the [contribution](https://github.com/sgl-project/sglang/pull/5921) from the SGLang team, we supported MiMo in SGLang mainstream within 24h with MTP coming soon.
Thanks to the [MiMo model support](https://github.com/sgl-project/sglang/pull/5921) and [MTP](https://github.com/sgl-project/sglang/pull/6059) contributions from the SGLang team, MiMo is now supported in SGLang mainline.
Example Script
@@ -132,9 +166,14 @@ python3 -m uv pip install "sglang[all] @ git+https://github.com/sgl-project/sgla
# Launch SGLang Server
python3 -m sglang.launch_server --model-path XiaomiMiMo/MiMo-7B-RL --host 0.0.0.0 --trust-remote-code
# Launch MTP Server
python3 -m sglang.launch_server --model-path XiaomiMiMo/MiMo-7B-RL --trust-remote-code \
--speculative-algorithm EAGLE --speculative-num-steps 1 --speculative-eagle-topk 1 \
--speculative-num-draft-tokens 2 --mem-fraction 0.5
```
Detailed usage can be found in [SGLang documents](https://docs.sglang.ai/backend/send_request.html). MTP will also be supported in 24h.
Detailed usage can be found in [SGLang documents](https://docs.sglang.ai/backend/send_request.html).
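Once the server is up, requests can be sent through SGLang's OpenAI-compatible HTTP API. The sketch below uses only the Python standard library; the default port 30000, the `/v1/chat/completions` path, and the sampling parameters are assumptions based on SGLang's defaults, so adjust them to match your deployment.

```python
# Minimal sketch of querying the SGLang server launched above.
# Assumes SGLang's default port (30000) and OpenAI-compatible endpoint.
import json
import urllib.request


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    # Payload for the OpenAI-compatible /v1/chat/completions endpoint.
    return {
        "model": "XiaomiMiMo/MiMo-7B-RL",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }


def query_server(prompt: str, base_url: str = "http://localhost:30000") -> str:
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # Extract the assistant's reply from the first completion choice.
    return body["choices"][0]["message"]["content"]


# Usage (with a server running):
#   reply = query_server("Solve: what is 17 * 24?")
```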
### vLLM inference
@@ -223,7 +262,7 @@ print(tokenizer.decode(output.tolist()[0]))
```bibtex
@misc{coreteam2025mimounlockingreasoningpotential,
title={MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining},
author={{Xiaomi LLM-Core Team}},
author={LLM-Core-Team Xiaomi},
year={2025},
eprint={2505.07608},
archivePrefix={arXiv},