Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training
October 6, 2025
Authors: Wei Xiong, Chenlu Ye, Baohao Liao, Hanze Dong, Xinxing Xu, Christof Monz, Jiang Bian, Nan Jiang, Tong Zhang
cs.AI
Abstract
Reinforcement learning applied to large language models (LLMs) for reasoning
tasks is often bottlenecked by unstable gradient estimates due to fixed and
uniform sampling of responses across prompts. Prior work such as GVM-RAFT
addresses this by dynamically allocating inference budget per prompt to
minimize stochastic gradient variance under a budget constraint. Inspired by
this insight, we propose Reinforce-Ada, an adaptive sampling framework for
online RL post-training of LLMs that continuously reallocates sampling effort
to the prompts with the greatest uncertainty or learning potential. Unlike
conventional two-stage allocation methods, Reinforce-Ada interleaves estimation
and sampling in an online successive elimination process, and automatically
stops sampling for a prompt once sufficient signal is collected. To stabilize
updates, we form fixed-size groups with enforced reward diversity and compute
advantage baselines using global statistics aggregated over the adaptive
sampling phase. Empirical results across multiple model architectures and
reasoning benchmarks show that Reinforce-Ada accelerates convergence and
improves final performance compared to GRPO, especially when using the balanced
sampling variant. Our work highlights the central role of variance-aware,
adaptive data curation in enabling efficient and reliable reinforcement
learning for reasoning-capable LLMs. Code is available at
https://github.com/RLHFlow/Reinforce-Ada.
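The adaptive sampling loop the abstract describes (interleaved estimation and sampling, per-prompt stopping once reward diversity is observed, fixed-size groups, and a global per-prompt baseline) can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the paper's exact algorithm: `sample_fn`, `reward_fn`, `group_size`, and the "both success and failure observed" stopping rule are illustrative assumptions.

```python
def adaptive_sample(prompts, sample_fn, reward_fn,
                    group_size=4, max_rounds=8):
    """Sketch of a Reinforce-Ada-style successive-elimination sampler.

    Each round draws one more response for every still-active prompt.
    A prompt is eliminated (sampling stops) once its rewards are
    diverse -- i.e. enough learning signal has been collected -- and
    at least group_size responses exist.
    """
    pools = {p: [] for p in prompts}  # all (response, reward) pairs seen
    active = set(prompts)
    for _ in range(max_rounds):
        if not active:
            break
        for p in list(active):
            resp = sample_fn(p)
            pools[p].append((resp, reward_fn(p, resp)))
            rewards = [r for _, r in pools[p]]
            if len(set(rewards)) > 1 and len(rewards) >= group_size:
                active.discard(p)  # sufficient signal: stop sampling

    groups = {}
    for p, pool in pools.items():
        rewards = [r for _, r in pool]
        if len(set(rewards)) < 2:
            continue  # all-same rewards carry no gradient signal; drop
        # global statistics over the whole adaptive phase, not just
        # the final group, define the advantage baseline
        baseline = sum(rewards) / len(rewards)
        # form a fixed-size group with enforced reward diversity
        best = [x for x in pool if x[1] == max(rewards)]
        worst = [x for x in pool if x[1] == min(rewards)]
        group = (best + worst)[:group_size]
        groups[p] = [(resp, r - baseline) for resp, r in group]
    return groups
```

Because the baseline is the mean reward over everything sampled for the prompt, the advantages within a group are centered against the full adaptive-phase statistics rather than the (diversity-filtered) group itself.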