
MemSkill: Learning and Evolving Memory Skills for Self-Evolving Agents

February 2, 2026
作者: Haozhen Zhang, Quanyu Long, Jianzhu Bao, Tao Feng, Weizhi Zhang, Haodong Yue, Wenya Wang
cs.AI

Abstract

Most Large Language Model (LLM) agent memory systems rely on a small set of static, hand-designed operations for extracting memory. These fixed procedures hard-code human priors about what to store and how to revise memory, making them rigid under diverse interaction patterns and inefficient on long histories. To this end, we present MemSkill, which reframes these operations as learnable and evolvable memory skills: structured, reusable routines for extracting, consolidating, and pruning information from interaction traces. Inspired by the design philosophy of agent skills, MemSkill employs a controller that learns to select a small set of relevant skills, paired with an LLM-based executor that produces skill-guided memories. Beyond learning skill selection, MemSkill introduces a designer that periodically reviews hard cases where the selected skills yield incorrect or incomplete memories, and evolves the skill set by proposing refinements and new skills. Together, these components form a closed-loop procedure that improves both the skill-selection policy and the skill set itself. Experiments on LoCoMo, LongMemEval, HotpotQA, and ALFWorld demonstrate that MemSkill improves task performance over strong baselines and generalizes well across settings. Further analyses shed light on how skills evolve, offering insights toward more adaptive, self-evolving memory management for LLM agents.
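The closed loop described above (controller selects skills, executor writes skill-guided memories, designer periodically evolves the skill set from hard cases) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all class names, function signatures, and the callable `controller`/`executor`/`designer` interfaces are assumptions introduced here for exposition.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A memory skill: a named, reusable routine (hypothetical structure)."""
    name: str
    instruction: str  # prompt-level routine for extract/consolidate/prune

def select_skills(controller, trace, skills, k=3):
    # Controller scores each skill for the current trace; keep the top-k.
    ranked = sorted(skills, key=lambda s: controller(trace, s), reverse=True)
    return ranked[:k]

def memskill_step(controller, executor, trace, skills, hard_cases):
    # One pass of the loop: select skills, produce a skill-guided memory,
    # and log the case if the resulting memory is incorrect or incomplete.
    chosen = select_skills(controller, trace, skills)
    memory, ok = executor(trace, chosen)
    if not ok:
        hard_cases.append((trace, chosen, memory))
    return memory

def evolve(designer, skills, hard_cases):
    # Designer reviews accumulated hard cases, refining existing skills
    # and proposing new ones; the reviewed cases are then cleared.
    refined, new = designer(skills, hard_cases)
    hard_cases.clear()
    return refined + new
```

In the full system the controller would be a learned selection policy and the executor and designer would be LLM calls; here they are plain callables so the control flow of the loop is visible in isolation.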
PDF · February 7, 2026