diff --git a/README.md b/README.md
index e75bc15..03c4f60 100644
--- a/README.md
+++ b/README.md
@@ -55,15 +56,24 @@ As an open-source project, OmniGen2 provides a powerful yet resource-efficient f
## 🔥 News
-- **2025-06-16**: [Gradio](https://github.com/VectorSpaceLab/OmniGen2?tab=readme-ov-file#-gradio-demo) and [Jupyter](https://github.com/VectorSpaceLab/OmniGen2/blob/main/example.ipynb) demo is available.
+- **2025-07-05**: The [X2I2](https://huggingface.co/datasets/OmniGen2/X2I2) training datasets are available.
+- **2025-07-03**: OmniGen2 now supports [TeaCache](https://github.com/ali-vilab/TeaCache) and [TaylorSeer](https://github.com/Shenyi-Z/TaylorSeer) for faster inference; see [Usage Tips](#-usage-tips) for details. Thanks to @legitnull for the great [TeaCache PR](https://github.com/VectorSpaceLab/OmniGen2/pull/52) and [TaylorSeer PR](https://github.com/VectorSpaceLab/OmniGen2/pull/76).
+- **2025-07-01**: OmniGen2 is now officially supported by [ComfyUI](https://comfyanonymous.github.io/ComfyUI_examples/omnigen). Thanks!
+- **2025-06-30**: Training code is available, see [fine-tuning](docs/FINETUNE.md) for details.
+- **2025-06-28**: We release the [OmniContext](https://huggingface.co/datasets/OmniGen2/OmniContext) benchmark. The evaluation code is in [omnicontext](https://github.com/VectorSpaceLab/OmniGen2/tree/main/omnicontext).
+- **2025-06-24**: The [Technical Report](https://arxiv.org/abs/2506.18871) is available.
+- **2025-06-23**: We've updated our code and HF model: OmniGen2 now runs *without* `flash-attn`. Users can still install it for optimal performance.
+- **2025-06-20**: Updated [resource requirements](#-resources-requirement), adding CPU offload support for devices with limited VRAM.
+- **2025-06-16**: [Gradio](https://github.com/VectorSpaceLab/OmniGen2?tab=readme-ov-file#-gradio-demo) and [Jupyter](https://github.com/VectorSpaceLab/OmniGen2/blob/main/example.ipynb) demos are available. Online Gradio demos: [Demo1](https://9c4426d27c3b9ecbed.gradio.live); [Chat-Demo1](https://0351497834a4d7226c.gradio.live); see more demo links in the [Gradio section](https://github.com/VectorSpaceLab/OmniGen2?tab=readme-ov-file#-gradio-demo).
- **2025-06-16**: We release **OmniGen2**, a multimodal generation model; model weights can be accessed on [Hugging Face](https://huggingface.co/OmniGen2/OmniGen2) and [ModelScope](https://www.modelscope.cn/models/OmniGen2/OmniGen2).
## 📌 TODO
-- [ ] Technical report.
-- [ ] In-context generation benchmark: **OmniContext**.
-- [ ] Support CPU offload and improve inference efficiency.
-- [ ] Training data and scripts.
-- [ ] Data construction pipeline.
+- [x] Technical report.
+- [x] Support CPU offload and improve inference efficiency.
+- [x] In-context generation benchmark: **OmniContext**.
+- [ ] Integration with diffusers.
+- [x] Training datasets.
+- [ ] Training data construction pipeline.
- [ ] ComfyUI Demo (**community support will be greatly appreciated!**).
## 🚀 Quick Start