PRL: Process Reward Learning Improves LLMs' Reasoning Ability and Broadens the Reasoning Boundary
January 15, 2026
Authors: Jiarui Yao, Ruida Wang, Tong Zhang
cs.AI
Abstract
Improving the reasoning ability of Large Language Models (LLMs) has been an active research topic. However, most existing work relies on outcome rewards at the trajectory level and therefore provides no fine-grained supervision over the reasoning process. Training frameworks that do incorporate process signals typically depend on costly additional steps, such as Monte Carlo Tree Search (MCTS) or training a separate reward model, which hurts training efficiency. Moreover, the design of these process signals often lacks rigorous theoretical support, leaving the underlying optimization mechanism opaque. In this paper, we propose Process Reward Learning (PRL), which decomposes the entropy-regularized reinforcement learning objective into intermediate reasoning steps and assigns rigorously derived process rewards to the model at each step. Starting from this theoretical motivation, we derive a PRL formulation that is essentially equivalent to maximizing the reward subject to a KL-divergence penalty between the policy model and a reference model, while converting the outcome reward into process supervision signals that better guide exploration during RL optimization. Our experiments show that PRL not only improves the average performance of LLM reasoning, measured by average@n, but also broadens the reasoning boundary, measured by pass@n. Extensive experiments verify the effectiveness and generalizability of PRL.
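For orientation only (this is generic notation, not the paper's exact derivation), the kind of step-level decomposition the abstract alludes to can be sketched from the standard KL-regularized objective, where \pi_\theta is the policy, \pi_{\mathrm{ref}} the reference model, r(x, y) the trajectory-level outcome reward, and \beta the KL coefficient:

% KL-regularized RL objective for prompt x and response y = (y_1, ..., y_T)
J(\theta) = \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\big[\, r(x, y) \,\big] - \beta\, \mathrm{KL}\big( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)
% By the chain rule over generation steps, the sequence-level KL expands into per-step terms:
          = \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\Big[\, r(x, y) - \beta \sum_{t=1}^{T} \log \frac{\pi_\theta(y_t \mid x, y_{<t})}{\pi_{\mathrm{ref}}(y_t \mid x, y_{<t})} \,\Big]

The second line follows from the chain rule of probability, so the trajectory-level objective already admits a per-step form; redistributing the outcome reward r(x, y) across steps alongside the per-step log-ratios then yields process-level signals without changing the objective being optimized, which is the general shape of the equivalence the abstract describes.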