OctoThinker:训练中期激励强化学习的规模化发展
OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling
June 25, 2025
Authors: Zengzhi Wang, Fan Zhou, Xuefeng Li, Pengfei Liu
cs.AI
Abstract
Different base language model families, such as Llama and Qwen, exhibit
divergent behaviors during post-training with reinforcement learning (RL),
especially on reasoning-intensive tasks. What makes a base language model
suitable for reinforcement learning? Gaining deeper insight into this question
is essential for developing RL-scalable foundation models of the next
generation. In this work, we investigate how mid-training strategies shape RL
dynamics, focusing on two representative model families: Qwen and Llama. Our
study reveals that (1) high-quality mathematical corpora, such as
MegaMath-Web-Pro, significantly improve both base model and RL performance,
while existing alternatives (e.g., FineMath-4plus) fail to do so; (2) further
adding QA-style data, particularly long chain-of-thought (CoT) reasoning
examples, enhances RL outcomes, and instruction data further unlocks this
effect; (3) while long-CoT improves reasoning depth, it can also induce
verbose model responses and instability in RL training, underscoring the
importance of data formatting; (4) scaling mid-training consistently leads to
stronger downstream RL performance. Building on these insights, we introduce a
two-stage mid-training strategy, Stable-then-Decay, in which base models are
first trained on 200B tokens with a constant learning rate, followed by 20B
tokens across three CoT-focused branches with learning rate decay. This yields
OctoThinker, a family of models demonstrating strong RL compatibility and
closing the performance gap with more RL-friendly model families such as Qwen.
We hope our work will help shape pre-training strategies for foundation models
in the RL era. To support further research, we release our open-source models
along with a curated math reasoning-intensive corpus of over 70 billion tokens
(i.e., MegaMath-Web-Pro-Max).
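
To make the Stable-then-Decay recipe concrete, below is a minimal sketch of what such a learning-rate schedule could look like as a function of training tokens. The 200B-token stable phase and 20B-token decay phase follow the abstract; the peak and final learning rates, the cosine decay shape, and the function name `stable_then_decay_lr` are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of a "Stable-then-Decay" learning-rate schedule.
# Phase lengths (200B stable tokens, 20B decay tokens) follow the abstract;
# the peak LR, final LR, and cosine decay shape are assumptions made purely
# for illustration.

import math

STABLE_TOKENS = 200e9   # stage 1: constant learning rate over 200B tokens
DECAY_TOKENS = 20e9     # stage 2: learning-rate decay over 20B tokens (one branch)
PEAK_LR = 3e-4          # assumed peak learning rate
FINAL_LR = 3e-5         # assumed final learning rate after decay


def stable_then_decay_lr(tokens_seen: float) -> float:
    """Return the learning rate after `tokens_seen` training tokens."""
    if tokens_seen <= STABLE_TOKENS:
        # Stage 1 ("stable"): hold the learning rate constant.
        return PEAK_LR
    # Stage 2 ("decay"): anneal toward FINAL_LR; a cosine shape is assumed here.
    progress = min((tokens_seen - STABLE_TOKENS) / DECAY_TOKENS, 1.0)
    return FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1.0 + math.cos(math.pi * progress))


if __name__ == "__main__":
    for t in (0, 100e9, 200e9, 210e9, 220e9):
        print(f"{t / 1e9:>6.0f}B tokens -> lr = {stable_then_decay_lr(t):.2e}")
```

Note that the paper's decay stage branches into three CoT-focused data mixtures; this sketch models the learning rate for a single branch only and says nothing about the data composition of each branch.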