Overflow Prevention Enhances Long-Context Recurrent LLMs
May 12, 2025
Authors: Assaf Ben-Kish, Itamar Zimerman, M. Jehanzeb Mirza, James Glass, Leonid Karlinsky, Raja Giryes
cs.AI
Abstract
A recent trend in LLMs is developing recurrent sub-quadratic models that
improve long-context processing efficiency. We investigate leading large
long-context models, focusing on how their fixed-size recurrent memory affects
their performance. Our experiments reveal that, even when these models are
trained for extended contexts, they still underutilize long contexts.
Specifically, we demonstrate that a chunk-based inference procedure, which
identifies and processes only the most relevant portion of the input, can
mitigate recurrent memory failures and be effective for many
long-context tasks: On LongBench, our method improves the overall performance
of Falcon3-Mamba-Inst-7B by 14%, Falcon-Mamba-Inst-7B by 28%,
RecurrentGemma-IT-9B by 50%, and RWKV6-Finch-7B by 51%. Surprisingly, this
simple approach also leads to state-of-the-art results in the challenging
LongBench v2 benchmark, showing competitive performance with equivalent-size
Transformers. Furthermore, our findings raise questions about whether recurrent
models genuinely exploit long-range dependencies, as our single-chunk strategy
delivers stronger performance, even in tasks that presumably require
cross-context relations.
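
To make the idea concrete, below is a minimal sketch of single-chunk inference as described above, written against the Hugging Face transformers API. The chunk size, the lexical-overlap relevance scorer, and the model identifier are illustrative assumptions for this sketch, not the paper's actual selection procedure.

```python
# Minimal sketch of chunk-based (single-chunk) inference for a recurrent
# long-context LLM. Assumptions not taken from the paper: the lexical-overlap
# relevance scorer, the character-based chunk size, and the model identifier
# are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tiiuae/Falcon3-Mamba-7B-Instruct"  # illustrative; any recurrent LLM works


def split_into_chunks(text: str, chunk_size: int = 2000) -> list[str]:
    """Split the long context into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def score_chunk(chunk: str, query: str) -> float:
    """Toy relevance score: fraction of query words that appear in the chunk."""
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    return len(query_words & chunk_words) / max(len(query_words), 1)


def single_chunk_inference(context: str, query: str, max_new_tokens: int = 256) -> str:
    """Answer a query using only the most relevant chunk of a long context,
    so the fixed-size recurrent memory never has to hold the full input."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    chunks = split_into_chunks(context)
    best_chunk = max(chunks, key=lambda c: score_chunk(c, query))

    prompt = f"Context:\n{best_chunk}\n\nQuestion: {query}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the newly generated answer.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

In this sketch, feeding the model only the highest-scoring chunk plays the role of the overflow-prevention step: the recurrent state is conditioned on a bounded, relevant slice of the input rather than the entire long context.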