
WebArbiter: A Principle-Guided Reasoning Process Reward Model for Web Agents

January 29, 2026
Authors: Yao Zhang, Shijie Tang, Zeyu Li, Zhen Han, Volker Tresp
cs.AI

Abstract

Web agents hold great potential for automating complex computer tasks, yet their interactions involve long-horizon, sequential decision-making with irreversible actions. In such settings, outcome-based supervision is sparse and delayed, often rewarding incorrect trajectories and failing to support inference-time scaling. This motivates the use of Process Reward Models (WebPRMs) for web navigation, but existing approaches remain limited: scalar WebPRMs collapse progress into coarse, weakly grounded signals, while checklist-based WebPRMs rely on brittle template matching that fails under layout or semantic changes and often mislabels superficially correct actions as successful, providing little insight or interpretability. To address these challenges, we introduce WebArbiter, a reasoning-first, principle-inducing WebPRM that formulates reward modeling as text generation, producing structured justifications that conclude with a preference verdict and identify the action most conducive to task completion under the current context. Training follows a two-stage pipeline: reasoning distillation equips the model with coherent principle-guided reasoning, and reinforcement learning corrects teacher biases by directly aligning verdicts with correctness, enabling stronger generalization. To support systematic evaluation, we release WebPRMBench, a comprehensive benchmark spanning four diverse web environments with rich tasks and high-quality preference annotations. On WebPRMBench, WebArbiter-7B outperforms the strongest baseline, GPT-5, by 9.1 points. In reward-guided trajectory search on WebArena-Lite, it surpasses the best prior WebPRM by up to 7.2 points, underscoring its robustness and practical value in real-world complex web tasks.
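The abstract describes reward-guided trajectory search, where a generative process reward model produces a structured justification ending in a preference verdict over candidate actions. The sketch below is a hypothetical illustration of that selection loop only: the `mock_prm` judge, the `Candidate` type, and all names are assumptions for illustration, not the paper's API, and a real WebPRM would be a language model generating the justification text.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate next action proposed by the web agent."""
    action: str
    justification: str = ""
    preferred: bool = False

def mock_prm(task: str, state: str, a: Candidate, b: Candidate) -> Candidate:
    """Stub pairwise judge standing in for a generative WebPRM.

    It 'generates' a justification that concludes with a preference
    verdict; here the heuristic simply prefers the action whose text
    mentions the task goal.
    """
    score = lambda c: task.lower() in c.action.lower()
    winner = a if score(a) >= score(b) else b
    winner.justification = (
        f"Principle: progress toward '{task}' from state '{state}'. "
        f"Verdict: prefer '{winner.action}'."
    )
    winner.preferred = True
    return winner

def select_action(task: str, state: str, candidates: list[Candidate]) -> Candidate:
    """Tournament over candidates: keep the pairwise-preferred action each round."""
    best = candidates[0]
    for challenger in candidates[1:]:
        best = mock_prm(task, state, best, challenger)
    return best
```

In a trajectory search, this selection would run at every step of the episode, with the verdict text doubling as an interpretable trace of why each action was chosen.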