Skywork Open Reasoner 1 Technical Report
May 28, 2025
Authors: Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, Yahui Zhou
cs.AI
Abstract
The success of DeepSeek-R1 underscores the significant role of reinforcement
learning (RL) in enhancing the reasoning capabilities of large language models
(LLMs). In this work, we present Skywork-OR1, an effective and scalable RL
implementation for long Chain-of-Thought (CoT) models. Building on the
DeepSeek-R1-Distill model series, our RL approach achieves notable performance
gains, increasing average accuracy across AIME24, AIME25, and LiveCodeBench
from 57.8% to 72.8% (+15.0%) for the 32B model and from 43.6% to 57.5% (+13.9%)
for the 7B model. Our Skywork-OR1-32B model surpasses both DeepSeek-R1 and
Qwen3-32B on the AIME24 and AIME25 benchmarks, while achieving comparable
results on LiveCodeBench. The Skywork-OR1-7B and Skywork-OR1-Math-7B models
demonstrate competitive reasoning capabilities among models of similar size. We
perform comprehensive ablation studies on the core components of our training
pipeline to validate their effectiveness. Additionally, we thoroughly
investigate the phenomenon of entropy collapse, identify key factors affecting
entropy dynamics, and demonstrate that mitigating premature entropy collapse is
critical for improved test performance. To support community research, we fully
open-source our model weights, training code, and training datasets.
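
As a point of reference for the entropy-collapse diagnostic mentioned in the abstract, RL pipelines for LLMs typically track the policy's mean per-token entropy over sampled rollouts; a sustained slide toward zero early in training is the "premature collapse" signature. The following PyTorch sketch shows one way such a monitor could be computed. The function name, tensor shapes, and masking convention are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def mean_token_entropy(logits: torch.Tensor, gen_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch (not the paper's code): mean per-token policy entropy.

    logits:   [batch, seq_len, vocab] raw policy outputs for sampled rollouts.
    gen_mask: [batch, seq_len] float mask, 1.0 at generated (non-prompt,
              non-pad) token positions and 0.0 elsewhere.
    """
    log_probs = torch.log_softmax(logits.float(), dim=-1)
    probs = log_probs.exp()
    # Per-position entropy: H_t = -sum_v p_t(v) * log p_t(v)
    token_entropy = -(probs * log_probs).sum(dim=-1)  # [batch, seq_len]
    # Average over generated tokens only; a steady decline toward ~0
    # across training steps indicates entropy collapse.
    return (token_entropy * gen_mask).sum() / gen_mask.sum().clamp(min=1.0)
```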