Memory Intelligence Agent
April 6, 2026
Authors: Jingyang Qiao, Weicheng Meng, Yu Cheng, Zhihang Lin, Zhizhong Zhang, Xin Tan, Jingyu Gong, Kun Shao, Yuan Xie
cs.AI
Abstract
Deep research agents (DRAs) integrate LLM reasoning with external tools. Memory systems enable DRAs to leverage historical experience, which is essential for efficient reasoning and autonomous evolution. Existing methods rely on retrieving similar trajectories from memory to aid reasoning, but suffer from two key limitations: ineffective memory evolution and ever-increasing storage and retrieval costs. To address these problems, we propose a novel Memory Intelligence Agent (MIA) framework built on a Manager-Planner-Executor architecture. The Memory Manager is a non-parametric memory system that stores compressed historical search trajectories; the Planner is a parametric memory agent that produces search plans for incoming questions; and the Executor is a separate agent that searches for and analyzes information guided by the search plan. To build the MIA framework, we first adopt an alternating reinforcement learning paradigm to strengthen cooperation between the Planner and the Executor. Furthermore, we enable the Planner to continuously evolve at test time, with updates performed on the fly alongside inference without interrupting the reasoning process. Additionally, we establish a bidirectional conversion loop between parametric and non-parametric memory to achieve efficient memory evolution. Finally, we incorporate reflection and unsupervised judgment mechanisms to boost reasoning and self-evolution in the open world. Extensive experiments across eleven benchmarks demonstrate the superiority of MIA.
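The Manager-Planner-Executor loop described in the abstract can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: every class name, method, and the compression scheme here are hypothetical stand-ins (in the actual system the Planner is a trained LLM and the Executor calls external search tools), shown only to make the division of roles concrete.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryManager:
    """Non-parametric memory: stores compressed search trajectories."""
    store: list = field(default_factory=list)

    def compress(self, trajectory: list) -> str:
        # Placeholder compression: keep only per-step summaries.
        return " | ".join(step["summary"] for step in trajectory)

    def add(self, trajectory: list) -> None:
        self.store.append(self.compress(trajectory))


class Planner:
    """Parametric memory agent: maps a question to an ordered search plan.

    In the paper this is an LLM updated on the fly at test time;
    here it is a stub returning fixed steps.
    """
    def plan(self, question: str) -> list:
        return [f"search: {question}", "analyze results", "synthesize answer"]


class Executor:
    """Carries out each plan step; real tool calls are stubbed out."""
    def execute(self, plan: list) -> list:
        return [{"step": s, "summary": f"done({s})"} for s in plan]


# One reasoning episode: plan -> execute -> store the compressed trajectory.
manager, planner, executor = MemoryManager(), Planner(), Executor()
plan = planner.plan("What is test-time memory evolution?")
trajectory = executor.execute(plan)
manager.add(trajectory)
print(len(manager.store))  # one compressed trajectory now in memory
```

The bidirectional conversion loop the abstract mentions would sit on top of this skeleton: compressed entries in the Manager inform Planner updates, and the evolving Planner in turn shapes which trajectories get stored.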