
SteP: Stacked LLM Policies for Web Actions

October 5, 2023
Authors: Paloma Sodhi, S. R. K. Branavan, Ryan McDonald
cs.AI

Abstract

Performing tasks on the web presents fundamental challenges to large language models (LLMs), including combinatorially large open-world tasks and variations across web interfaces. Simply specifying a large prompt to handle all possible behaviors and states is extremely complex, and results in behavior leaks between unrelated behaviors. Decomposing the task into distinct policies can address this challenge, but requires carefully handing off control between policies. We propose Stacked LLM Policies for Web Actions (SteP), an approach to dynamically compose policies to solve a diverse set of web tasks. SteP defines a Markov Decision Process where the state is a stack of policies representing the control state, i.e., the chain of policy calls. Unlike traditional methods that are restricted to static hierarchies, SteP enables dynamic control that adapts to the complexity of the task. We evaluate SteP against multiple baselines and web environments including WebArena, MiniWoB++, and a CRM. On WebArena, SteP improves (14.9% to 33.5%) over SOTA methods that use GPT-4 policies, while on MiniWoB++, SteP is competitive with prior works while using significantly less data. Our code and data are available at https://asappresearch.github.io/webagents-step.
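The core idea — the control state is a stack of policies, where the active policy can emit a primitive web action, push a sub-policy (handing off control), or pop itself (returning control to its caller) — can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: `Decision`, `run_stack`, and the toy `root`/`login` policies are hypothetical names, and real SteP policies are LLM-prompted rather than hand-coded.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Decision:
    """One step of a policy: act in the environment, call a sub-policy, or finish."""
    kind: str                     # "act" | "call" | "done"
    action: Optional[str] = None  # primitive web action when kind == "act"
    callee: Optional[str] = None  # sub-policy name when kind == "call"

Policy = Callable[[str], Decision]

def run_stack(policies: Dict[str, Policy], root: str,
              observations: List[str]) -> List[str]:
    """Drive the policy stack over a stream of observations.

    The stack is the control state: the policy on top reads the current
    observation and either emits an action, pushes a sub-policy (handing
    off control), or pops itself (returning control to its caller).
    """
    stack: List[str] = [root]
    actions: List[str] = []
    for obs in observations:
        if not stack:          # root policy finished: task complete
            break
        decision = policies[stack[-1]](obs)
        if decision.kind == "act":
            actions.append(decision.action)
        elif decision.kind == "call":
            stack.append(decision.callee)   # hand off control downward
        else:                                # "done"
            stack.pop()                      # return control to caller

    return actions

# Toy policies: a root task that delegates login to a sub-policy.
def root_policy(obs: str) -> Decision:
    return Decision("call", callee="login") if obs == "start" else Decision("done")

def login_policy(obs: str) -> Decision:
    return Decision("act", action="type_password") if obs == "login_page" else Decision("done")

actions = run_stack({"root": root_policy, "login": login_policy},
                    root="root",
                    observations=["start", "login_page", "submitted", "end"])
print(actions)  # -> ['type_password']
```

Because the stack grows and shrinks at runtime, the depth of composition adapts to each task rather than being fixed by a static hierarchy, which is the property the abstract emphasizes.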
December 15, 2024