
Structured Episodic Event Memory

January 10, 2026
Authors: Zhengxuan Lu, Dongfang Li, Yukun Shi, Beilun Wang, Longyue Wang, Baotian Hu
cs.AI

Abstract

Current approaches to memory in Large Language Models (LLMs) predominantly rely on static Retrieval-Augmented Generation (RAG), which often results in scattered retrieval and fails to capture the structural dependencies required for complex reasoning. For autonomous agents, these passive and flat architectures lack the cognitive organization necessary to model the dynamic and associative nature of long-term interaction. To address this, we propose Structured Episodic Event Memory (SEEM), a hierarchical framework that synergizes a graph memory layer for relational facts with a dynamic episodic memory layer for narrative progression. Grounded in cognitive frame theory, SEEM transforms interaction streams into structured Episodic Event Frames (EEFs) anchored by precise provenance pointers. Furthermore, we introduce an agentic associative fusion and Reverse Provenance Expansion (RPE) mechanism to reconstruct coherent narrative contexts from fragmented evidence. Experimental results on the LoCoMo and LongMemEval benchmarks demonstrate that SEEM significantly outperforms baselines, enabling agents to maintain superior narrative coherence and logical consistency.
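The abstract describes Episodic Event Frames anchored by provenance pointers, and a Reverse Provenance Expansion (RPE) step that follows those pointers back into the raw interaction stream to rebuild context. A minimal sketch of how such a structure and expansion might look, assuming hypothetical field names and a simple turn-window heuristic (none of these identifiers come from the paper):

```python
from dataclasses import dataclass, field

# Hypothetical EEF layout: the paper does not specify field names,
# so these are illustrative assumptions.
@dataclass
class EpisodicEventFrame:
    event_id: str
    summary: str                  # condensed narrative of the event
    provenance: list = field(default_factory=list)  # turn indices into the raw dialogue

def reverse_provenance_expansion(frames, dialogue, window=1):
    """Naive illustration of the RPE idea: follow each retrieved frame's
    provenance pointers back into the raw dialogue, then expand with
    neighboring turns to reconstruct a coherent narrative context."""
    turns = set()
    for frame in frames:
        for idx in frame.provenance:
            for j in range(max(0, idx - window), min(len(dialogue), idx + window + 1)):
                turns.add(j)
    return [dialogue[j] for j in sorted(turns)]

dialogue = ["A: I adopted a cat.", "B: What's its name?", "A: Mochi.", "B: Cute!"]
frame = EpisodicEventFrame("e1", "User adopted a cat named Mochi", provenance=[2])
context = reverse_provenance_expansion([frame], dialogue)
# context holds the pointed-to turn plus its neighbors, not just the single fragment
```

The point of the sketch is only the retrieval direction: instead of returning the isolated matched fragment (as flat RAG would), the pointers are used to pull surrounding turns back in, which is the coherence-reconstruction behavior the abstract attributes to RPE.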