Where LLM Agents Fail and How They can Learn From Failures

September 29, 2025
Authors: Kunlun Zhu, Zijia Liu, Bingxuan Li, Muxin Tian, Yingxuan Yang, Jiaxun Zhang, Pengrui Han, Qipeng Xie, Fuyang Cui, Weijia Zhang, Xiaoteng Ma, Xiaodong Yu, Gowtham Ramesh, Jialian Wu, Zicheng Liu, Pan Lu, James Zou, Jiaxuan You
cs.AI

Abstract

Large Language Model (LLM) agents, which integrate planning, memory, reflection, and tool-use modules, have shown promise in solving complex, multi-step tasks. Yet their sophisticated architectures amplify vulnerability to cascading failures, where a single root-cause error propagates through subsequent decisions, leading to task failure. Current systems lack a framework that can comprehensively understand agent errors in a modular and systematic way, and therefore fail to detect these errors accordingly. We address this gap with three contributions. First, we introduce the AgentErrorTaxonomy, a modular classification of failure modes spanning memory, reflection, planning, action, and system-level operations. Second, we construct AgentErrorBench, the first dataset of systematically annotated failure trajectories from ALFWorld, GAIA, and WebShop, grounding error analysis in real-world agent rollouts. Third, we propose AgentDebug, a debugging framework that isolates root-cause failures and provides corrective feedback, enabling agents to recover and iteratively improve. Experiments on AgentErrorBench show that AgentDebug achieves 24% higher all-correct accuracy and 17% higher step accuracy compared to the strongest baseline. Beyond detection, the targeted feedback generated by AgentDebug enables LLM agents to iteratively recover from failures, yielding up to 26% relative improvements in task success across ALFWorld, GAIA, and WebShop. These results establish principled debugging as a pathway to more reliable and adaptive LLM agents. The code and data will be available at https://github.com/ulab-uiuc/AgentDebug.
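To make the detect-then-recover idea concrete, the sketch below shows one way a debug-and-retry loop over a taxonomy of module-level failures could be wired up. This is a minimal illustration under assumed names: `ErrorModule`, `StepDiagnosis`, `debug_and_retry`, `run_agent`, `diagnose`, and `is_success` are hypothetical and do not reflect the actual AgentDebug API or implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List, Optional

class ErrorModule(Enum):
    """Failure-mode modules, in the spirit of the paper's taxonomy (names illustrative)."""
    MEMORY = "memory"
    REFLECTION = "reflection"
    PLANNING = "planning"
    ACTION = "action"
    SYSTEM = "system"

@dataclass
class StepDiagnosis:
    step_index: int      # step in the trajectory where the root cause occurred
    module: ErrorModule  # module the root-cause error is attributed to
    feedback: str        # corrective feedback to inject into the next attempt

def debug_and_retry(
    run_agent: Callable[[str, Optional[str]], List[str]],      # (task, feedback) -> trajectory
    diagnose: Callable[[List[str]], Optional[StepDiagnosis]],   # trajectory -> root-cause diagnosis
    is_success: Callable[[List[str]], bool],
    task: str,
    max_rounds: int = 3,
) -> bool:
    """Run the agent, diagnose the root-cause failure, feed back the correction, and retry."""
    feedback: Optional[str] = None
    for _ in range(max_rounds):
        trajectory = run_agent(task, feedback)
        if is_success(trajectory):
            return True
        diagnosis = diagnose(trajectory)
        if diagnosis is None:
            return False  # no actionable root cause identified
        feedback = (
            f"Previous attempt failed at step {diagnosis.step_index} "
            f"({diagnosis.module.value} error): {diagnosis.feedback}"
        )
    return False
```

The key design point mirrored here is that feedback targets a single root-cause step and module rather than the whole failed trajectory, which is what allows the next rollout to correct course instead of repeating the cascading failure.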