Process Reward Agents for Steering Knowledge-Intensive Reasoning
April 10, 2026
Authors: Jiwoong Sohn, Tomasz Sternal, Kenneth Styppa, Torsten Hoefler, Michael Moor
cs.AI
Abstract
Reasoning in knowledge-intensive domains remains challenging as intermediate steps are often not locally verifiable: unlike in math or code, evaluating step correctness may require synthesizing clues across large external knowledge sources. As a result, subtle errors can propagate through reasoning traces, potentially never to be detected. Prior work has proposed process reward models (PRMs), including retrieval-augmented variants, but these methods operate post hoc, scoring completed trajectories, which prevents their integration into dynamic inference procedures. Here, we introduce Process Reward Agents (PRA), a test-time method for providing domain-grounded, online, step-wise rewards to a frozen policy. In contrast to prior retrieval-augmented PRMs, PRA enables search-based decoding to rank and prune candidate trajectories at every generation step. Experiments on multiple medical reasoning benchmarks demonstrate that PRA consistently outperforms strong baselines, achieving 80.8% accuracy on MedQA with Qwen3-4B, a new state of the art at the 4B scale. Importantly, PRA generalizes to unseen frozen policy models ranging from 0.5B to 8B parameters, improving their accuracy by up to 25.7% without any policy model updates. More broadly, PRA suggests a paradigm in which frozen reasoners are decoupled from domain-specific reward modules, allowing new backbones to be deployed in complex domains without retraining.
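To make the search-based decoding concrete, below is a minimal Python sketch of how an online process reward agent could rank and prune candidate reasoning trajectories at each generation step. This is an illustration under assumptions, not the paper's implementation: `propose_steps`, `score_step`, `is_final`, and all parameters are hypothetical names for a frozen policy that drafts candidate next steps and a reward agent that scores them against retrieved domain knowledge.

```python
# Minimal sketch of reward-guided stepwise beam search (hypothetical API).
# `policy` and `reward_agent` are assumed objects: a frozen policy model that
# proposes candidate next reasoning steps, and a process reward agent that
# returns a step-level reward grounded in external knowledge.

def pra_guided_search(question, policy, reward_agent,
                      beam_width=4, branch=4, max_steps=8):
    """Rank and prune candidate reasoning trajectories at every step."""
    beams = [([], 0.0)]  # (partial trajectory, cumulative reward)
    for _ in range(max_steps):
        candidates = []
        for steps, score in beams:
            if policy.is_final(steps):
                # Completed trajectories carry their score forward unchanged.
                candidates.append((steps, score))
                continue
            # Frozen policy proposes several candidate next steps.
            for step in policy.propose_steps(question, steps, n=branch):
                # Online, step-wise reward from the domain-grounded agent.
                r = reward_agent.score_step(question, steps, step)
                candidates.append((steps + [step], score + r))
        # Keep only the top-scoring partial trajectories; prune the rest.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(policy.is_final(s) for s, _ in beams):
            break
    return max(beams, key=lambda b: b[1])[0]  # best trajectory found
```

The key contrast with post hoc PRMs is that scoring happens inside the decoding loop: low-reward branches are pruned before they are ever extended, so subtle errors are cut off rather than propagating silently through the trace.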