Update chat_template.jinja (#85)

- Update chat_template.jinja (41876e41628351e3e23273661c25adf2114bf8af)

Co-authored-by: Quentin Gallouédec <qgallouedec@users.noreply.huggingface.co>
Author: ai-modelscope
Date: 2025-08-09 08:11:54 +08:00
Parent: 71fc9e89c7
Commit: 20a1d1f9c3
2 changed files with 3 additions and 3 deletions

README.md

@@ -22,7 +22,7 @@ tags:
 Welcome to the gpt-oss series, [OpenAI's open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
 We're releasing two flavors of these open models:
-- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)
+- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
 - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
 Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
@@ -38,7 +38,7 @@ Both models were trained on our [harmony response format](https://github.com/ope
 * **Full chain-of-thought:** Gain complete access to the model's reasoning process, facilitating easier debugging and increased trust in outputs. It's not intended to be shown to end users.
 * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
 * **Agentic capabilities:** Use the models' native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
-* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory.
+* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
 ---
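
The card text above assumes the checkpoints are driven through the chat template, which renders prompts in the harmony format. A minimal sketch of that flow, assuming the `transformers` chat pipeline and the `openai/gpt-oss-20b` checkpoint (the prompt and `max_new_tokens` here are illustrative, not from the card):

```python
# Minimal sketch: the pipeline applies chat_template.jinja (the file edited in
# this commit), so the prompt is rendered in the harmony format automatically.
# Assumes `transformers` and `accelerate` are installed and enough GPU memory
# is available for the MXFP4 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain what MXFP4 quantization buys you, briefly."},
]

out = generator(messages, max_new_tokens=128)
# Chat pipelines return the whole conversation; the last turn is the reply.
print(out[0]["generated_text"][-1]["content"])
```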

chat_template.jinja

@@ -245,9 +245,9 @@
 {%- if developer_message %}
 {{- "# Instructions\n\n" }}
 {{- developer_message }}
+{{- "\n\n" }}
 {%- endif %}
 {%- if tools -%}
-{{- "\n\n" }}
 {{- "# Tools\n\n" }}
 {{- render_tool_namespace("functions", tools) }}
 {%- endif -%}
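
Reading the hunk as one line added and one removed (which matches the commit's 3-additions/3-deletions total alongside the two README changes), the `"\n\n"` separator moves from the tools block into the developer-message block. A self-contained jinja2 sketch of just this block (not the full template; variable names copied from the hunk, marker direction as read above) shows the behavioral difference when tools are passed without a developer message:

```python
# Minimal sketch isolating the changed block: with the old placement, a
# tools-only render opens the developer section with a stray blank line.
from jinja2 import Template

OLD = (
    r'{%- if developer_message %}{{- "# Instructions\n\n" }}'
    r'{{- developer_message }}{%- endif %}'
    r'{%- if tools -%}{{- "\n\n" }}{{- "# Tools\n\n" }}{%- endif -%}'
)
NEW = (
    r'{%- if developer_message %}{{- "# Instructions\n\n" }}'
    r'{{- developer_message }}{{- "\n\n" }}{%- endif %}'
    r'{%- if tools -%}{{- "# Tools\n\n" }}{%- endif -%}'
)

# Tools present, no developer message: the old block leaks leading newlines.
print(repr(Template(OLD).render(developer_message="", tools=True)))  # '\n\n# Tools\n\n'
print(repr(Template(NEW).render(developer_message="", tools=True)))  # '# Tools\n\n'
```

With both a developer message and tools present, the two variants render identically; the difference only appears in the tools-only case (and, conversely, the new version emits a trailing `"\n\n"` after a developer message with no tools).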