Update the model name in USAGE_POLICY (#52)

- Update USAGE_POLICY (ccef4ca7b48a5797b0b436e77f5cd0c643449942)

Co-authored-by: B. <Enes@users.noreply.huggingface.co>
ai-modelscope
2025-08-07 06:46:22 +08:00
parent bfbcca03ca
commit d62b81784f
4 changed files with 162 additions and 164 deletions


@@ -13,7 +13,7 @@ tags:
 <p align="center">
 <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
 <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
-<a href="https://openai.com/index/gpt-oss-model-card"><strong>System card</strong></a> ·
+<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
 <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
 </p>
@@ -21,8 +21,8 @@ tags:
 Welcome to the gpt-oss series, [OpenAI's open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
-We're releasing two flavors of the open models:
-- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fits into a single H100 GPU (117B parameters with 5.1B active parameters)
+We're releasing two flavors of these open models:
+- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)
 - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
 Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
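The last line of the diff stresses that gpt-oss only works with the harmony response format. As a rough illustration of what that format looks like on the wire, here is a minimal sketch that assembles a prompt from harmony's special tokens (`<|start|>`, `<|message|>`, `<|end|>`). The helper names are hypothetical; real applications should render prompts with the openai/harmony library rather than hand-building strings, since the full format also covers channels, tool calls, and reasoning output.

```python
# Hypothetical helpers sketching harmony's turn structure; in practice,
# use the openai/harmony library to render conversations correctly.

def render_harmony_turn(role: str, content: str) -> str:
    """Wrap a single conversation turn in harmony's special tokens."""
    return f"<|start|>{role}<|message|>{content}<|end|>"

def render_harmony_prompt(system: str, user: str) -> str:
    """Assemble a minimal completion prompt; it ends with the assistant
    header so the model generates the assistant's reply next."""
    return (
        render_harmony_turn("system", system)
        + render_harmony_turn("user", user)
        + "<|start|>assistant"
    )

prompt = render_harmony_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

Feeding plain chat-template text instead of harmony-formatted tokens is exactly the misuse the README warns about: the model was trained only on this structure, so responses degrade without it.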