

AutoLibra: Agent Metric Induction from Open-Ended Feedback

May 5, 2025
Authors: Hao Zhu, Phil Cuvin, Xinkai Yu, Charlotte Ka Yee Yan, Jason Zhang, Diyi Yang
cs.AI

Abstract

Agents are predominantly evaluated and optimized via task success metrics, which are coarse, rely on manual design from experts, and fail to reward intermediate emergent behaviors. We propose AutoLibra, a framework for agent evaluation that transforms open-ended human feedback, e.g., "If you find that the button is disabled, don't click it again", or "This agent has too much autonomy to decide what to do on its own", into metrics for evaluating fine-grained behaviors in agent trajectories. AutoLibra accomplishes this by grounding feedback to an agent's behavior, clustering similar positive and negative behaviors, and creating metrics with clear definitions and concrete examples, which can be used to prompt LLM-as-a-Judge evaluators. We further propose two meta-metrics to evaluate the alignment of a set of (induced) metrics with open feedback: "coverage" and "redundancy". By optimizing these meta-metrics, we experimentally demonstrate AutoLibra's ability to induce more concrete agent evaluation metrics than those proposed in previous agent evaluation benchmarks, and to discover new metrics for analyzing agents. We also present two applications of AutoLibra in agent improvement: First, we show that AutoLibra-induced metrics serve as better prompt-engineering targets than task success rate on a wide range of text game tasks, improving agent performance over baseline by a mean of 20%. Second, we show that AutoLibra can iteratively select high-quality fine-tuning data for web navigation agents. Our results suggest that AutoLibra is a powerful task-agnostic tool for evaluating and improving language agents.
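The abstract's pipeline (ground feedback to behaviors, induce metrics, then score the metric set via "coverage" and "redundancy") can be sketched in code. The sketch below is a minimal illustration, not the paper's actual implementation: the data model (`Feedback`, `Metric`) and the particular formalizations of coverage (fraction of feedback items captured by at least one metric) and redundancy (fraction of metrics that add no behaviors beyond another metric) are assumptions for exposition.

```python
from dataclasses import dataclass

# Hypothetical data model: each piece of open-ended feedback is grounded
# to one or more observed agent behaviors (by id), and each induced
# metric's definition captures a set of those behaviors.

@dataclass(frozen=True)
class Feedback:
    text: str
    behaviors: frozenset  # behavior ids this feedback is grounded to

@dataclass(frozen=True)
class Metric:
    name: str
    covers: frozenset  # behavior ids the metric's definition captures

def coverage(feedback, metrics):
    """Fraction of feedback items whose grounded behaviors are
    captured by at least one induced metric."""
    if not feedback:
        return 0.0
    covered = sum(
        1 for f in feedback
        if any(f.behaviors & m.covers for m in metrics)
    )
    return covered / len(feedback)

def redundancy(metrics):
    """Fraction of metrics whose covered behaviors are fully
    contained in some other metric's coverage (i.e., add nothing new)."""
    if not metrics:
        return 0.0
    redundant = sum(
        1 for i, m in enumerate(metrics)
        if any(m.covers <= other.covers
               for j, other in enumerate(metrics) if i != j)
    )
    return redundant / len(metrics)
```

A metric set that covers all feedback with no overlap would score coverage 1.0 and redundancy 0.0, e.g.:

```python
fb = [Feedback("don't re-click disabled buttons", frozenset({"b1"})),
      Feedback("too much autonomy", frozenset({"b2"}))]
ms = [Metric("button-state awareness", frozenset({"b1"})),
      Metric("autonomy calibration", frozenset({"b2"}))]
coverage(fb, ms)  # 1.0
redundancy(ms)    # 0.0
```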

