BroRL: Scaling Reinforcement Learning via Broadened Exploration
October 1, 2025
Authors: Jian Hu, Mingjie Liu, Ximing Lu, Fang Wu, Zaid Harchaoui, Shizhe Diao, Yejin Choi, Pavlo Molchanov, Jun Yang, Jan Kautz, Yi Dong
cs.AI
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a key
ingredient for unlocking complex reasoning capabilities in large language
models. Recent work ProRL has shown promise in scaling RL by increasing the
number of training steps. However, performance plateaus after thousands of
steps, with clear diminishing returns from allocating more computation to
additional training. In this work, we investigate a complementary paradigm for
scaling RL, BroR-Lincreasing the number of rollouts per example to hundreds to
exhaustively Broaden exploration, which yields continuous performance gains
beyond the saturation point observed in ProRL when scaling the number of
training steps. Our approach is motivated by a mass balance equation analysis
allowing us to characterize the rate of change in probability mass for correct
and incorrect tokens during the reinforcement process. We show that under a
one-step RL assumption, sampled rollout tokens always contribute to
correct-mass expansion, while unsampled tokens outside rollouts may lead to
gains or losses depending on their distribution and the net reward balance.
Importantly, as the number of rollouts per example N increases, the effect of
unsampled terms diminishes, ensuring overall correct-mass expansion. To
validate our theoretical analysis, we conduct simulations under more relaxed
conditions and find that a sufficiently large rollout size N-corresponding to
ample exploration-guarantees an increase in the probability mass of all correct
tokens. Empirically, BroRL revives models saturated after 3K ProRL training
steps and demonstrates robust, continuous improvement, achieving
state-of-the-art results for the 1.5B model across diverse benchmarks.
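
The abstract's claim that a larger rollout count N makes the change in correct-token probability mass reliably positive can be illustrated with a small, self-contained simulation. The sketch below is hypothetical and is not the paper's actual setup or equations: it applies a single REINFORCE-style update at one decision step, with an assumed vocabulary size, reward scheme, and learning rate, and reports how the change in total correct-token mass behaves as N grows.

```python
# Hypothetical one-step simulation (not the paper's exact setup): illustrate how
# the change in total correct-token probability mass behaves as the rollout count
# N per example grows.
import numpy as np

rng = np.random.default_rng(0)

K = 50                      # vocabulary size at a single decision step (assumed)
correct = np.arange(5)      # first 5 tokens treated as "correct" (assumed)
logits = rng.normal(size=K) # initial policy logits
lr = 0.5                    # single-step learning rate (assumed)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def delta_correct_mass(n_rollouts, trials=200):
    """Average change in correct-token mass after one REINFORCE-style step with N rollouts."""
    p0 = softmax(logits)
    base_mass = p0[correct].sum()
    deltas = []
    for _ in range(trials):
        # Sample N rollout tokens from the current policy.
        samples = rng.choice(K, size=n_rollouts, p=p0)
        rewards = np.where(np.isin(samples, correct), 1.0, -1.0)
        adv = rewards - rewards.mean()          # mean-reward baseline
        grad = np.zeros(K)
        for tok, a in zip(samples, adv):
            onehot = np.zeros(K)
            onehot[tok] = 1.0
            grad += a * (onehot - p0)           # gradient of log-prob under softmax
        grad /= n_rollouts
        p1 = softmax(logits + lr * grad)        # one-step update
        deltas.append(p1[correct].sum() - base_mass)
    deltas = np.array(deltas)
    return deltas.mean(), (deltas > 0).mean()

for n in (4, 16, 64, 256):
    mean_delta, frac_gain = delta_correct_mass(n)
    print(f"N={n:4d}  mean change in correct mass={mean_delta:+.4f}  "
          f"fraction of trials with a gain={frac_gain:.2f}")
```

In this toy setting, small N often yields noisy or zero updates (e.g., when all sampled tokens share the same reward), while larger N makes the one-step change in correct-token mass consistently positive, mirroring the exploration-scaling intuition stated in the abstract.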