PRInTS: Reward Modeling for Long-Horizon Information Seeking

November 24, 2025
Authors: Jaewoo Lee, Archiki Prasad, Justin Chih-Yao Chen, Zaid Khan, Elias Stengel-Eskin, Mohit Bansal
cs.AI

Abstract
Information-seeking is a core capability for AI agents, requiring them to gather and reason over tool-generated information across long trajectories. However, such multi-step information-seeking tasks remain challenging for agents backed by language models. While process reward models (PRMs) can guide agents by ranking candidate steps at test-time, existing PRMs, designed for short reasoning with binary judgment, cannot capture richer dimensions of information-seeking steps, such as tool interactions and reasoning over tool outputs, nor handle the rapidly growing context in long-horizon tasks. To address these limitations, we introduce PRInTS, a generative PRM trained with dual capabilities: (1) dense scoring based on the PRM's reasoning across multiple step quality dimensions (e.g., interpretation of tool outputs, tool call informativeness) and (2) trajectory summarization that compresses the growing context while preserving essential information for step evaluation. Extensive evaluations across FRAMES, GAIA (levels 1-3), and WebWalkerQA (easy-hard) benchmarks on multiple models, along with ablations, reveal that best-of-n sampling with PRInTS enhances information-seeking abilities of open-source models as well as specialized agents, matching or surpassing the performance of frontier models with a much smaller backbone agent and outperforming other strong reward modeling baselines.
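The test-time mechanism described above, best-of-n sampling guided by a process reward model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_step`, `summarize_trajectory`, and the word-count scoring heuristic are hypothetical stand-ins for PRInTS's learned dense scoring and trajectory summarization.

```python
# Sketch of best-of-n step selection with a process reward model (PRM).
# A real PRM (like PRInTS) would generate reasoning over step-quality
# dimensions (tool-call informativeness, interpretation of tool outputs)
# and emit a dense score; here a toy heuristic stands in for that model.

def score_step(summary: str, candidate: str) -> float:
    """Toy dense score in [0, 1]: more distinct words ~ more informative.
    Purely illustrative; PRInTS scores via generative reasoning."""
    distinct_words = len(set(candidate.split()))
    return min(distinct_words / 10.0, 1.0)

def summarize_trajectory(trajectory: list[str], max_chars: int = 200) -> str:
    """Toy context compression: keep only the trajectory's tail.
    PRInTS instead generates a summary that preserves the information
    needed to evaluate the next step."""
    return " | ".join(trajectory)[-max_chars:]

def best_of_n(trajectory: list[str], candidates: list[str]) -> str:
    """Score each sampled candidate step against the compressed
    trajectory and return the highest-scoring one."""
    summary = summarize_trajectory(trajectory)
    return max(candidates, key=lambda c: score_step(summary, c))

# Hypothetical long-horizon information-seeking state:
trajectory = ["search('GAIA benchmark levels')", "read result: levels 1-3 listed"]
candidates = [
    "repeat previous search",
    "open the GAIA leaderboard and extract level-1 agent scores for comparison",
]
print(best_of_n(trajectory, candidates))
```

Summarizing before scoring is what keeps this workable on long trajectories: the PRM's input stays bounded even as the agent's raw tool-interaction history grows.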
December 3, 2025