
Process Reward Agents for Steering Knowledge-Intensive Reasoning

April 10, 2026
作者: Jiwoong Sohn, Tomasz Sternal, Kenneth Styppa, Torsten Hoefler, Michael Moor
cs.AI

Abstract

Reasoning in knowledge-intensive domains remains challenging as intermediate steps are often not locally verifiable: unlike math or code, evaluating step correctness may require synthesizing clues across large external knowledge sources. As a result, subtle errors can propagate through reasoning traces, potentially never to be detected. Prior work has proposed process reward models (PRMs), including retrieval-augmented variants, but these methods operate post hoc, scoring completed trajectories, which prevents their integration into dynamic inference procedures. Here, we introduce Process Reward Agents (PRA), a test-time method for providing domain-grounded, online, step-wise rewards to a frozen policy. In contrast to prior retrieval-augmented PRMs, PRA enables search-based decoding to rank and prune candidate trajectories at every generation step. Experiments on multiple medical reasoning benchmarks demonstrate that PRA consistently outperforms strong baselines, achieving 80.8% accuracy on MedQA with Qwen3-4B, a new state of the art at the 4B scale. Importantly, PRA generalizes to unseen frozen policy models ranging from 0.5B to 8B parameters, improving their accuracy by up to 25.7% without any policy model updates. More broadly, PRA suggests a paradigm in which frozen reasoners are decoupled from domain-specific reward modules, allowing the deployment of new backbones in complex domains without retraining.
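The abstract describes PRA as supplying online, step-wise rewards that let a search-based decoder rank and prune candidate trajectories at every generation step. The following is a minimal illustrative sketch of that general idea (reward-guided beam search over partial reasoning traces); all function names, the toy policy, and the toy reward are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of reward-guided step-wise search decoding.
# `expand` and `reward` are illustrative stand-ins: a real system would
# use the frozen policy model to propose next steps and a process reward
# agent grounded in external domain knowledge to score them.

def expand(trace):
    # Toy policy: propose two candidate continuations of a partial trace.
    return [trace + [f"step{len(trace)}-{i}"] for i in range(2)]

def reward(trace):
    # Toy process reward: score a partial trace (here, a trivial heuristic).
    return -len(trace[-1])

def guided_search(n_steps=3, beam_width=2):
    beams = [[]]  # start from an empty reasoning trace
    for _ in range(n_steps):
        # Expand every surviving trace, then rank all candidates
        # by the online step-wise reward.
        candidates = [c for b in beams for c in expand(b)]
        candidates.sort(key=reward, reverse=True)
        # Prune to the top-k trajectories before the next step.
        beams = candidates[:beam_width]
    return beams
```

The key contrast with post-hoc PRMs is visible in the loop: scoring happens at every step, so weak partial trajectories are pruned before errors can propagate.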