
Beyond Correctness: Harmonizing Process and Outcome Rewards through RL Training

September 3, 2025
Authors: Chenlu Ye, Zhou Yu, Ziji Zhang, Hao Chen, Narayanan Sadagopan, Jing Huang, Tong Zhang, Anurag Beniwal
cs.AI

Abstract

Reinforcement learning with verifiable rewards (RLVR) has emerged as a predominant paradigm for mathematical reasoning tasks, offering stable improvements in reasoning ability. However, Outcome Reward Models (ORMs) in RLVR are too coarse-grained to distinguish flawed reasoning within correct answers or valid reasoning within incorrect answers. This lack of granularity introduces significant noise and misleading gradients, hindering further progress in reasoning process quality. While Process Reward Models (PRMs) offer fine-grained guidance for intermediate steps, they frequently suffer from inaccuracies and are susceptible to reward hacking. To resolve this dilemma, we introduce the PRocess cOnsistency Filter (PROF), an effective data curation method that harmonizes noisy, fine-grained process rewards with accurate, coarse-grained outcome rewards. Rather than naively blending PRM and ORM signals in the objective function (arXiv:2506.18896), PROF leverages their complementary strengths through consistency-driven sample selection. Our approach retains correct responses with higher averaged process values and incorrect responses with lower averaged process values, while maintaining a balance between positive and negative training samples. Extensive experiments demonstrate that our method not only consistently improves final accuracy by over 4% compared to blending approaches, but also strengthens the quality of intermediate reasoning steps. Code and training recipes are available at https://github.com/Chenluye99/PROF.
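
To make the selection rule concrete, below is a minimal Python sketch of consistency-driven filtering as described in the abstract: correct responses are ranked by average process value and kept from the top, incorrect responses from the bottom, with equal numbers of each retained. The names (`Response`, `prof_filter`, `keep_ratio`) and the exact balancing rule are illustrative assumptions, not the released PROF implementation; see the linked repository for the actual training recipe.

```python
# Hypothetical sketch of PROF-style consistency filtering (not the official code).
from dataclasses import dataclass
from typing import List


@dataclass
class Response:
    text: str
    is_correct: bool           # outcome signal (verifiable / ORM reward)
    step_rewards: List[float]  # per-step process rewards from a PRM

    @property
    def avg_process_value(self) -> float:
        # Average PRM score over intermediate steps; guard against empty traces.
        return sum(self.step_rewards) / max(len(self.step_rewards), 1)


def prof_filter(responses: List[Response], keep_ratio: float = 0.5) -> List[Response]:
    """Keep correct responses with the highest average process value and
    incorrect responses with the lowest, in equal numbers, so that the
    positive/negative training samples stay balanced."""
    correct = sorted((r for r in responses if r.is_correct),
                     key=lambda r: r.avg_process_value, reverse=True)
    incorrect = sorted((r for r in responses if not r.is_correct),
                       key=lambda r: r.avg_process_value)
    # Balance the two groups, then truncate by the keep ratio.
    n = min(len(correct), len(incorrect))
    k = max(1, int(n * keep_ratio)) if n > 0 else 0
    return correct[:k] + incorrect[:k]
```

In a group-sampling RLVR setup, a filter like this would be applied per prompt to the sampled rollouts before computing policy-gradient updates, so that responses whose process scores contradict their outcome label are dropped rather than contributing noisy gradients.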