
Memp: Exploring Agent Procedural Memory

August 8, 2025
作者: Runnan Fang, Yuan Liang, Xiaobin Wang, Jialong Wu, Shuofei Qiao, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang
cs.AI

Abstract

Agents based on Large Language Models (LLMs) excel at diverse tasks, yet they suffer from brittle procedural memory that is manually engineered or entangled in static parameters. In this work, we investigate strategies to endow agents with a learnable, updatable, and lifelong procedural memory. We propose Memp, which distills past agent trajectories into both fine-grained, step-by-step instructions and higher-level, script-like abstractions, and we explore the impact of different strategies for the Build, Retrieval, and Update of procedural memory. Coupled with a dynamic regimen that continuously updates, corrects, and deprecates its contents, this repository evolves in lockstep with new experience. Empirical evaluation on TravelPlanner and ALFWorld shows that as the memory repository is refined, agents achieve steadily higher success rates and greater efficiency on analogous tasks. Moreover, procedural memory built from a stronger model retains its value: migrating the procedural memory to a weaker model yields substantial performance gains.
August 11, 2025