
Web-Shepherd: Advancing PRMs for Reinforcing Web Agents

May 21, 2025
作者: Hyungjoo Chae, Sunghwan Kim, Junhee Cho, Seungone Kim, Seungjun Moon, Gyeom Hwangbo, Dongha Lim, Minjin Kim, Yeonjun Hwang, Minju Gwak, Dongwook Choi, Minseok Kang, Gwanhoon Im, ByeongUng Cho, Hyojun Kim, Jun Hee Han, Taeyoon Kwon, Minju Kim, Beong-woo Kwak, Dongjin Kang, Jinyoung Yeo
cs.AI

Abstract

Web navigation is a unique domain that can automate many repetitive real-life tasks and is challenging because it requires long-horizon sequential decision making beyond typical multimodal large language model (MLLM) tasks. Yet, specialized reward models for web navigation that can be used during both training and test time have been absent until now. Despite the importance of speed and cost-effectiveness, prior works have used MLLMs as reward models, which poses significant constraints for real-world deployment. To address this, in this work we propose Web-Shepherd, the first process reward model (PRM) that can assess web navigation trajectories at the step level. To achieve this, we first construct the WebPRM Collection, a large-scale dataset with 40K step-level preference pairs and annotated checklists spanning diverse domains and difficulty levels. Next, we introduce WebRewardBench, the first meta-evaluation benchmark for evaluating PRMs. In our experiments, Web-Shepherd achieves about 30 points better accuracy on WebRewardBench compared to using GPT-4o. Furthermore, when testing on WebArena-lite with GPT-4o-mini as the policy and Web-Shepherd as the verifier, we achieve 10.9 points better performance at 10 times lower cost compared to using GPT-4o-mini as the verifier. Our model, dataset, and code are publicly available at LINK.
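
As a rough illustration of the verifier setup the abstract describes (a policy model proposes candidate actions and a step-level PRM scores them), here is a minimal Python sketch. The interfaces `policy.propose_actions`, `prm.score_step`, and the `env` object are hypothetical placeholders for this sketch, not the paper's actual Web-Shepherd API.

```python
# Minimal sketch of step-level verification with a process reward model (PRM).
# All interfaces below (policy, prm, env) are hypothetical stand-ins.
from dataclasses import dataclass, field


@dataclass
class Step:
    observation: str  # e.g. a textual description of the current web page
    action: str       # e.g. "click [submit_button]"


@dataclass
class Trajectory:
    instruction: str
    steps: list[Step] = field(default_factory=list)


def run_with_verifier(policy, prm, instruction, env, n_candidates=4, max_steps=20):
    """At each step, sample several candidate actions from the policy and
    keep the one the step-level PRM scores highest given the trajectory so far."""
    traj = Trajectory(instruction)
    obs = env.reset(instruction)
    for _ in range(max_steps):
        # Policy proposes n candidate next actions for the current state.
        candidates = policy.propose_actions(instruction, obs, traj.steps, n=n_candidates)
        # The PRM assigns a step-level reward to each candidate in context.
        best = max(candidates, key=lambda a: prm.score_step(instruction, traj.steps, obs, a))
        traj.steps.append(Step(observation=obs, action=best))
        obs, done = env.step(best)
        if done:
            break
    return traj
```

The design choice this sketch highlights is that verification happens per step rather than only on the final outcome, which is what allows a cheap PRM to guide a lighter policy model such as GPT-4o-mini at test time.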

