Dynamic Long Context Reasoning over Compressed Memory via End-to-End Reinforcement Learning

February 9, 2026
Authors: Zhuoen Chen, Dongfang Li, Meishan Zhang, Baotian Hu, Min Zhang
cs.AI

Abstract

Large Language Models (LLMs) face significant challenges in long-context processing, including quadratic computational costs, information forgetting, and the context fragmentation inherent in retrieval-augmented generation (RAG). We propose a cognitively inspired framework for efficient long-context inference based on chunk-wise compression and selective memory recall, rather than processing all raw tokens. The framework segments long inputs into chunks and encodes each chunk into compressed memory representations using a learned compressor. A gating module dynamically selects relevant memory blocks, which are then iteratively processed by a reasoning module with an evolving working memory to solve downstream tasks. The compressor and reasoner are jointly optimized via end-to-end reinforcement learning, while the gating module is trained separately as a classifier. Experimental results show that the proposed method achieves competitive accuracy on multi-hop reasoning benchmarks such as RULER-HQA, extrapolates context length from 7K to 1.75M tokens, and offers a favorable accuracy-efficiency trade-off compared to strong long-context baselines. In particular, it achieves up to a 2× reduction in peak GPU memory usage and a 6× inference speedup over MemAgent.
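The pipeline described in the abstract (chunk the input, compress each chunk into a memory slot, gate the slots against the query, then reason iteratively over the selected slots with an evolving working memory) can be illustrated with a minimal sketch. This is a toy illustration, not the paper's implementation: the embedding table, mean-pooling compressor, dot-product gate, chunk size, memory dimension, and working-memory update rule are all assumed placeholders for the learned modules that the paper trains with end-to-end RL (compressor and reasoner) and as a separate classifier (gate).

```python
# Toy sketch of the chunk -> compress -> gate -> reason loop from the abstract.
# Every module body here is a hypothetical stand-in for a learned component.
import numpy as np

rng = np.random.default_rng(0)

CHUNK_TOKENS = 512   # tokens per chunk (assumed hyperparameter)
MEM_DIM = 64         # size of each compressed memory slot (assumed)
EMBED = rng.standard_normal((1000, MEM_DIM))  # toy token embedding table


def chunk(tokens, size=CHUNK_TOKENS):
    """Split a long token sequence into fixed-size chunks."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]


def compress(chunk_tokens):
    """Stand-in compressor: reduce a chunk to one fixed-size memory vector.
    In the paper this is a learned model optimized jointly with the reasoner."""
    emb = EMBED[np.asarray(chunk_tokens) % len(EMBED)]
    return emb.mean(axis=0)


def gate(memories, query_vec, top_k=4):
    """Stand-in gating module: score each memory slot against the query and
    keep the top-k. The paper trains this module separately as a classifier."""
    scores = np.array([m @ query_vec for m in memories])
    return list(np.argsort(-scores)[:top_k])


def reason(selected, query_vec):
    """Stand-in reasoner: iterate over the selected memory blocks while
    updating an evolving working memory, then emit an answer score."""
    working_memory = np.zeros(MEM_DIM)
    for mem in selected:
        working_memory = 0.5 * working_memory + 0.5 * mem  # toy update rule
    return working_memory @ query_vec


# End-to-end run over a synthetic 10K-token input.
tokens = list(range(10_000))
query_vec = rng.standard_normal(MEM_DIM)

memories = [compress(c) for c in chunk(tokens)]
selected_ids = gate(memories, query_vec)
answer = reason([memories[i] for i in selected_ids], query_vec)
print(f"{len(memories)} memory slots, selected {selected_ids}, answer score {answer:.3f}")
```

Because only the selected compressed slots (rather than all raw tokens) reach the reasoner, the cost per reasoning step stays roughly constant as the input grows, which is the source of the memory and speed advantages reported above.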