
Memp: Exploring Agent Procedural Memory

August 8, 2025
Authors: Runnan Fang, Yuan Liang, Xiaobin Wang, Jialong Wu, Shuofei Qiao, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang
cs.AI

Abstract

Large Language Model (LLM)-based agents excel at diverse tasks, yet they suffer from brittle procedural memory that is manually engineered or entangled in static parameters. In this work, we investigate strategies to endow agents with a learnable, updatable, and lifelong procedural memory. We propose Memp, which distills past agent trajectories into both fine-grained, step-by-step instructions and higher-level, script-like abstractions, and we explore the impact of different strategies for the Build, Retrieval, and Update of procedural memory. Coupled with a dynamic regimen that continuously updates, corrects, and deprecates its contents, this repository evolves in lockstep with new experience. Empirical evaluation on TravelPlanner and ALFWorld shows that, as the memory repository is refined, agents achieve steadily higher success rates and greater efficiency on analogous tasks. Moreover, procedural memory built from a stronger model retains its value: migrating it to a weaker model yields substantial performance gains.
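
The abstract describes a Build / Retrieve / Update cycle over a procedural memory repository distilled from past trajectories and continually corrected or deprecated. The snippet below is a minimal, hypothetical Python sketch of such a cycle; the `ProceduralMemory` class, its methods, and the token-overlap retriever are illustrative assumptions, not the Memp implementation reported in the paper.

```python
# Hypothetical sketch of a Build / Retrieve / Update loop for procedural memory.
# All names here are illustrative assumptions, not the paper's code.
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    task: str           # task description the trajectory solved
    steps: list[str]    # fine-grained, step-by-step instructions
    script: str         # higher-level, script-like abstraction
    successes: int = 0  # bookkeeping for the update/deprecate regimen
    failures: int = 0


class ProceduralMemory:
    def __init__(self, deprecate_after: int = 3):
        self.entries: list[MemoryEntry] = []
        self.deprecate_after = deprecate_after

    # Build: distill a finished trajectory into both memory formats.
    def build(self, task: str, trajectory: list[str]) -> None:
        steps = [f"Step {i + 1}: {action}" for i, action in enumerate(trajectory)]
        script = f"To solve '{task}': " + " -> ".join(trajectory)
        self.entries.append(MemoryEntry(task=task, steps=steps, script=script))

    # Retrieve: return the entry whose task overlaps most with the new task.
    # Simple token overlap stands in for whatever retriever is actually used.
    def retrieve(self, task: str) -> MemoryEntry | None:
        query = set(task.lower().split())
        scored = [(len(query & set(e.task.lower().split())), e) for e in self.entries]
        scored = [(score, e) for score, e in scored if score > 0]
        return max(scored, key=lambda pair: pair[0])[1] if scored else None

    # Update: reinforce entries that helped; correct or deprecate ones that did not.
    def update(self, entry: MemoryEntry, succeeded: bool,
               corrected_trajectory: list[str] | None = None) -> None:
        if succeeded:
            entry.successes += 1
            return
        entry.failures += 1
        if corrected_trajectory is not None:
            # Rebuild the entry from the corrected experience.
            entry.steps = [f"Step {i + 1}: {a}" for i, a in enumerate(corrected_trajectory)]
            entry.script = f"To solve '{entry.task}': " + " -> ".join(corrected_trajectory)
        elif entry.failures >= self.deprecate_after:
            self.entries.remove(entry)  # deprecate stale or misleading memory


if __name__ == "__main__":
    memory = ProceduralMemory()
    memory.build("heat an egg and put it on the table",
                 ["find egg", "microwave egg", "place egg on table"])
    hit = memory.retrieve("heat an apple and put it on the table")
    if hit:
        print(hit.script)
        memory.update(hit, succeeded=True)
```

In this sketch, retrieval over analogous tasks lets a new episode reuse an earlier script, and the update step is what keeps the repository evolving with new experience rather than accumulating stale procedures.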