OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling
June 25, 2025
Authors: Zengzhi Wang, Fan Zhou, Xuefeng Li, Pengfei Liu
cs.AI
Abstract
Different base language model families, such as Llama and Qwen, exhibit divergent behaviors during post-training with reinforcement learning (RL), especially on reasoning-intensive tasks. What makes a base language model suitable for reinforcement learning? Gaining deeper insight into this question is essential for developing RL-scalable foundation models of the next generation. In this work, we investigate how mid-training strategies shape RL dynamics, focusing on two representative model families: Qwen and Llama. Our study reveals that (1) high-quality mathematical corpora, such as MegaMath-Web-Pro, significantly improve both base model and RL performance, while existing alternatives (e.g., FineMath-4plus) fail to do so; (2) further adding QA-style data, particularly long chain-of-thought (CoT) reasoning examples, enhances RL outcomes, and instruction data further unlocks this effect; (3) while long CoT improves reasoning depth, it can also induce verbose model responses and instability in RL training, underscoring the importance of data formatting; (4) scaling mid-training consistently leads to stronger downstream RL performance. Building on these insights, we introduce a two-stage mid-training strategy, Stable-then-Decay, in which base models are first trained on 200B tokens with a constant learning rate, followed by 20B tokens across three CoT-focused branches with learning rate decay. This yields OctoThinker, a family of models demonstrating strong RL compatibility and closing the performance gap with more RL-friendly model families, i.e., Qwen. We hope our work will help shape pre-training strategies for foundation models in the RL era. To support further research, we release our open-source models along with a curated math reasoning-intensive corpus of over 70 billion tokens (i.e., MegaMath-Web-Pro-Max).
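
To make the Stable-then-Decay schedule concrete, below is a minimal sketch of a learning-rate schedule with the token budgets the abstract describes: a constant learning rate for the first 200B tokens, then a 20B-token decay phase per branch. The peak and final learning rates and the cosine decay shape are illustrative assumptions, not values reported by the paper.

```python
import math

STABLE_TOKENS = 200e9   # stage 1: constant-learning-rate ("stable") phase
DECAY_TOKENS = 20e9     # stage 2: per-branch decay phase
PEAK_LR = 3e-4          # assumed peak learning rate (illustrative)
FINAL_LR = 3e-5         # assumed final learning rate (illustrative)

def stable_then_decay_lr(tokens_seen: float) -> float:
    """Return the learning rate after `tokens_seen` training tokens."""
    if tokens_seen <= STABLE_TOKENS:
        # Stable stage: hold the learning rate constant.
        return PEAK_LR
    # Decay stage: cosine-anneal from PEAK_LR to FINAL_LR over DECAY_TOKENS
    # (the decay form is an assumption; the paper only specifies that the
    # learning rate decays in this stage).
    progress = min((tokens_seen - STABLE_TOKENS) / DECAY_TOKENS, 1.0)
    return FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1 + math.cos(math.pi * progress))

# Example: midway through the decay branch, the rate sits between peak and final.
print(stable_then_decay_lr(210e9))
```

In this reading, the three CoT-focused branches would each start from the same 200B-token stable checkpoint and run their own 20B-token decay phase on different data mixtures.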