

V_{0.5}: Generalist Value Model as a Prior for Sparse RL Rollouts

March 11, 2026
作者: Yi-Kai Zhang, Yueqing Sun, Hongyan Hao, Qi Gu, Xunliang Cai, De-Chuan Zhan, Han-Jia Ye
cs.AI

Abstract

In Reinforcement Learning with Verifiable Rewards (RLVR), constructing a robust advantage baseline is critical for policy gradients, as it effectively guides the policy model to reinforce desired behaviors. Recent research has introduced Generalist Value Models (such as V_0), which achieve pre-trained value estimation by explicitly encoding model capabilities in-context, eliminating the need to update the value model synchronously alongside the policy model. In this paper, we propose V_{0.5}, which adaptively fuses the baseline predicted by such a value model (acting as a prior) with the empirical mean derived from sparse rollouts, constructing a robust baseline that balances computational efficiency with extremely low variance. Specifically, we introduce a real-time statistical test and a dynamic budget-allocation mechanism that together balance the high variance caused by sparse sampling against the systematic bias (or hallucinations) inherent in the value model's prior. By constructing a hypothesis test that evaluates the prior's reliability in real time, the system dynamically allocates additional rollout budget on demand. This mechanism minimizes the baseline estimator's mean squared error (MSE), guaranteeing stable policy gradients even under extreme sparsity with a group size of 4. Extensive evaluations across six mathematical reasoning benchmarks demonstrate that V_{0.5} significantly outperforms GRPO and DAPO, achieving faster convergence and a performance improvement of over 10%.
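The fusion mechanism described above can be sketched in a few lines. The abstract does not give the paper's exact formulas, so the following is an illustrative assumption: the prior is treated as a near-zero-variance but possibly biased estimate, the rollout mean as an unbiased but high-variance one, and the MSE-minimizing convex weight is computed with `(mean - prior)^2` as a plug-in estimate of the prior's squared bias. A simple z-style consistency test decides whether to spend extra rollout budget; the names (`fused_baseline`, `sample_more`, `z_crit`) are hypothetical, not from the paper.

```python
import math

def fused_baseline(prior, rewards, z_crit=1.96, extra_budget=4,
                   sample_more=None):
    """Fuse a value-model prior with the empirical mean of sparse rollouts.

    Illustrative sketch only; the paper's exact test and weighting
    scheme are not specified in the abstract.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / max(n - 1, 1)
    se = math.sqrt(var / n) if var > 0 else 1e-8

    # Hypothesis test: is the prior consistent with the sampled mean?
    z = abs(mean - prior) / se
    if z > z_crit and sample_more is not None:
        # Prior looks unreliable: spend additional rollout budget on demand.
        rewards = rewards + [sample_more() for _ in range(extra_budget)]
        n = len(rewards)
        mean = sum(rewards) / n
        var = sum((r - mean) ** 2 for r in rewards) / (n - 1)

    # MSE-minimizing convex weight: w = Var(mean) / (bias^2 + Var(mean)),
    # using (mean - prior)^2 as a plug-in estimate of the squared bias.
    bias_sq = (mean - prior) ** 2
    var_mean = var / n
    w_prior = var_mean / (bias_sq + var_mean + 1e-12)
    return w_prior * prior + (1 - w_prior) * mean
```

When the prior agrees with the rollouts, the weight shifts toward the low-variance prior; when they disagree strongly, the weight collapses onto the empirical mean, which is the behavior the bias-variance trade-off in the abstract calls for.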