ChatPaper.ai


V_{0.5}: Generalist Value Model as a Prior for Sparse RL Rollouts

March 11, 2026
Authors: Yi-Kai Zhang, Yueqing Sun, Hongyan Hao, Qi Gu, Xunliang Cai, De-Chuan Zhan, Han-Jia Ye
cs.AI

Abstract
In Reinforcement Learning with Verifiable Rewards (RLVR), constructing a robust advantage baseline is critical for policy-gradient methods, as it effectively guides the policy model to reinforce desired behaviors. Recent work has introduced Generalist Value Models (such as V_0), which provide pre-trained value estimation by explicitly encoding model capabilities in-context, eliminating the need to update a value model synchronously alongside the policy model. In this paper, we propose V_{0.5}, which adaptively fuses the baseline predicted by such a value model (acting as a prior) with the empirical mean derived from sparse rollouts, yielding a robust baseline that combines computational efficiency with very low variance. Specifically, we introduce a real-time statistical test and a dynamic budget allocation mechanism that balance the high variance caused by sparse sampling against the systematic bias (or hallucinations) inherent in the value model's prior. By constructing a hypothesis test that evaluates the prior's reliability in real time, the system dynamically allocates additional rollout budget on demand. This mechanism minimizes the baseline estimator's mean squared error (MSE), guaranteeing stable policy gradients even under extreme sparsity with a group size of 4. Extensive evaluations across six mathematical reasoning benchmarks demonstrate that V_{0.5} significantly outperforms GRPO and DAPO, achieving faster convergence and an approximately 10% performance improvement.
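The prior-plus-rollout fusion described above can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's actual method: the inverse-variance weighting, the z-style consistency test, the threshold `z_crit`, and all function names (`fused_baseline`, `prior_reliable`, `adaptive_rollouts`) are assumptions made here for clarity.

```python
import math

def fused_baseline(prior, rewards, sigma2_prior):
    """Precision-weighted fusion of a prior baseline with the empirical
    rollout mean. Inverse-variance weighting minimizes the MSE of the
    combined estimator when the prior is unbiased (an assumption here)."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / max(n - 1, 1)
    sigma2_emp = var / n  # variance of the empirical mean
    w = sigma2_emp / (sigma2_emp + sigma2_prior)  # weight on the prior
    return w * prior + (1 - w) * mean

def prior_reliable(prior, rewards, z_crit=1.96):
    """Simple z-style test: is the prior statistically consistent with
    the mean of the sparse rollouts?"""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / max(n - 1, 1)
    se = math.sqrt(var / n) or 1e-8  # guard against zero standard error
    return abs(prior - mean) / se <= z_crit

def adaptive_rollouts(prior, sample_fn, base_k=4, extra_k=4, sigma2_prior=0.01):
    """Start with a sparse group (e.g. size 4); if the test rejects the
    prior, spend extra rollout budget before fusing."""
    rewards = [sample_fn() for _ in range(base_k)]
    if not prior_reliable(prior, rewards):
        rewards += [sample_fn() for _ in range(extra_k)]
    return fused_baseline(prior, rewards, sigma2_prior)
```

When the prior agrees with the rollout mean, the test passes and the baseline is computed from just the sparse group; when it disagrees (a biased or hallucinated prior), the extra budget shifts the fused estimate toward the empirical evidence.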
PDF · March 13, 2026