
AdaMem: Adaptive User-Centric Memory for Long-Horizon Dialogue Agents

March 17, 2026
Authors: Shannan Yan, Jingchen Ni, Leqi Zheng, Jiajun Zhang, Peixi Wu, Dacheng Yin, Jing Lyu, Chun Yuan, Fengyun Rao
cs.AI

Abstract

Large language model (LLM) agents increasingly rely on external memory to support long-horizon interaction, personalized assistance, and multi-step reasoning. However, existing memory systems still face three core challenges: they often rely too heavily on semantic similarity, which can miss evidence crucial for user-centric understanding; they frequently store related experiences as isolated fragments, weakening temporal and causal coherence; and they typically use static memory granularities that do not adapt well to the requirements of different questions. We propose AdaMem, an adaptive user-centric memory framework for long-horizon dialogue agents. AdaMem organizes dialogue history into working, episodic, persona, and graph memories, enabling the system to preserve recent context, structured long-term experiences, stable user traits, and relation-aware connections within a unified framework. At inference time, AdaMem first resolves the target participant, then builds a question-conditioned retrieval route that combines semantic retrieval with relation-aware graph expansion only when needed, and finally produces the answer through a role-specialized pipeline for evidence synthesis and response generation. We evaluate AdaMem on the LoCoMo and PERSONAMEM benchmarks for long-horizon reasoning and user modeling. Experimental results show that AdaMem achieves state-of-the-art performance on both benchmarks. The code will be released upon acceptance.
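The inference-time flow described above (resolve the target participant, run semantic retrieval, expand over the relation graph only when the question needs it, then synthesize an answer) can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: all class and function names are hypothetical, keyword overlap stands in for embedding-based semantic retrieval, and the role-specialized synthesis pipeline is collapsed to a single string.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    # The four memory types named in the abstract; the fields and their
    # representations here are illustrative assumptions.
    working: list = field(default_factory=list)   # recent dialogue turns
    episodic: list = field(default_factory=list)  # structured long-term experiences
    persona: dict = field(default_factory=dict)   # stable user traits
    graph: dict = field(default_factory=dict)     # relation-aware edges: memory -> related memories

def semantic_retrieve(store: MemoryStore, question: str, k: int = 3) -> list:
    # Stand-in for semantic search: rank episodic memories by word overlap.
    q_words = set(question.lower().split())
    def score(ep: str) -> int:
        return len(q_words & set(ep.lower().split()))
    return sorted(store.episodic, key=score, reverse=True)[:k]

def graph_expand(store: MemoryStore, seeds: list) -> list:
    # Relation-aware expansion: add one-hop neighbors of retrieved evidence.
    expanded = list(seeds)
    for s in seeds:
        for neighbor in store.graph.get(s, []):
            if neighbor not in expanded:
                expanded.append(neighbor)
    return expanded

def answer(store: MemoryStore, question: str, participant: str,
           needs_relations: bool = False) -> str:
    # Question-conditioned route: graph expansion only when the question
    # requires relational reasoning, per the abstract.
    evidence = semantic_retrieve(store, question)
    if needs_relations:
        evidence = graph_expand(store, evidence)
    # Evidence synthesis and response generation are abstracted away here.
    return f"[{participant}] evidence: {evidence}"
```

The key design point mirrored here is that graph expansion is conditional: purely factual questions stop at semantic retrieval, while relational questions pull in connected memories before synthesis.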