Artificial Hippocampus Networks for Efficient Long-Context Modeling
October 8, 2025
Authors: Yunhao Fang, Weihao Yu, Shu Zhong, Qinghao Ye, Xuehan Xiong, Lai Wei
cs.AI
Abstract
Long-sequence modeling faces a fundamental trade-off between the efficiency
of compressive fixed-size memory in RNN-like models and the fidelity of
lossless growing memory in attention-based Transformers. Inspired by the
Multi-Store Model in cognitive science, we introduce a memory framework of
artificial neural networks. Our method maintains a sliding window of the
Transformer's KV cache as lossless short-term memory, while a learnable module
termed Artificial Hippocampus Network (AHN) recurrently compresses
out-of-window information into a fixed-size compact long-term memory. To
validate this framework, we instantiate AHNs using modern RNN-like
architectures, including Mamba2, DeltaNet, and Gated DeltaNet. Extensive
experiments on long-context benchmarks LV-Eval and InfiniteBench demonstrate
that AHN-augmented models consistently outperform sliding window baselines and
achieve performance comparable or even superior to full-attention models, while
substantially reducing computational and memory requirements. For instance,
augmenting Qwen2.5-3B-Instruct with AHNs reduces inference FLOPs by 40.5%
and memory cache by 74.0%, while improving its average score on LV-Eval (128k
sequence length) from 4.41 to 5.88. Code is available at:
https://github.com/ByteDance-Seed/AHN.
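
To make the short-term/long-term memory split concrete, here is a minimal sketch of the idea in PyTorch. It is not the released implementation: the class name, the simple gated outer-product update rule (a stand-in for the Mamba2/DeltaNet/Gated DeltaNet modules named in the abstract), and the shapes involved are all illustrative assumptions; refer to the official repository for the actual AHN modules.

```python
# Illustrative sketch only: recent tokens are kept losslessly in a sliding-window
# KV cache (short-term memory), while evicted tokens are recurrently folded into
# a fixed-size state (long-term memory). The gated update below is a simplified
# placeholder for the RNN-like AHN instantiations described in the paper.
import torch
import torch.nn as nn


class SlidingWindowWithCompressedMemory(nn.Module):
    """Keeps the most recent `window` key/value pairs exactly, and compresses
    all earlier (out-of-window) pairs into a fixed-size recurrent state."""

    def __init__(self, d_model: int, window: int = 1024):
        super().__init__()
        self.window = window
        self.d_model = d_model
        # Scalar gate controlling how strongly each evicted token updates
        # the compressed state (hypothetical, for illustration only).
        self.gate = nn.Linear(d_model, 1)

    def forward(self, keys: torch.Tensor, values: torch.Tensor):
        """keys, values: (seq_len, d_model) for a single head/sequence."""
        seq_len = keys.size(0)
        # Fixed-size long-term memory: an outer-product state of shape (d, d).
        state = keys.new_zeros(self.d_model, self.d_model)

        window_start = max(0, seq_len - self.window)
        # Recurrently compress every token that falls outside the window.
        for t in range(window_start):
            k_t, v_t = keys[t], values[t]
            g_t = torch.sigmoid(self.gate(k_t))          # gate in (0, 1)
            state = g_t * state + torch.outer(k_t, v_t)  # gated accumulation

        # Lossless short-term memory: the raw KV cache of recent tokens.
        window_k = keys[window_start:]
        window_v = values[window_start:]
        return state, (window_k, window_v)


# Usage: compress a 4096-token sequence with a 1024-token window.
mem = SlidingWindowWithCompressedMemory(d_model=64, window=1024)
k, v = torch.randn(4096, 64), torch.randn(4096, 64)
state, (wk, wv) = mem(k, v)
print(state.shape, wk.shape, wv.shape)  # (64, 64), (1024, 64), (1024, 64)
```

The point of the sketch is the memory budget: regardless of sequence length, the model retains only a fixed-size compressed state plus a bounded window of exact KV entries, which is the source of the FLOPs and cache savings reported in the abstract.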