InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation
September 29, 2025
Authors: Weilin Zhao, Zihan Zhou, Zhou Su, Chaojun Xiao, Yuxuan Li, Yanghao Li, Yudi Zhang, Weilun Zhao, Zhen Li, Yuxiang Huang, Ao Sun, Xu Han, Zhiyuan Liu
cs.AI
Abstract
Long-sequence processing is a critical capability for modern large language
models. However, the self-attention mechanism in the standard Transformer
architecture faces severe computational and memory bottlenecks when processing
long sequences. While trainable sparse attention methods offer a promising
solution, existing approaches such as NSA introduce excessive extra parameters
and disrupt the conventional "pretrain-on-short, finetune-on-long" workflow,
resulting in slow convergence and making practical acceleration difficult. To
overcome these limitations, we introduce a dense-sparse switchable attention
framework, termed InfLLM-V2. InfLLM-V2 is a trainable sparse attention
mechanism that
seamlessly adapts models from short to long sequences. Specifically, InfLLM-V2
reuses dense attention parameters through a parameter-free architecture
modification, maintaining consistency between short and long sequence
processing. Additionally, InfLLM-V2 ensures computational efficiency across all
sequence lengths by using dense attention for short inputs and smoothly
transitioning to sparse attention for long sequences. To achieve practical
acceleration, we further introduce an efficient implementation of InfLLM-V2
that significantly reduces the computational overhead. Our experiments on
long-context understanding and chain-of-thought reasoning demonstrate that
InfLLM-V2 is 4× faster than dense attention while retaining 98.1% and
99.7% of the performance, respectively. Based on the InfLLM-V2 framework, we
have trained and open-sourced MiniCPM4.1
(https://huggingface.co/openbmb/MiniCPM4.1-8B), a hybrid reasoning model,
providing a reproducible implementation for the research community.
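
To make the switchable mechanism concrete, below is a minimal, illustrative PyTorch sketch of the idea described above: sequences at or below a length threshold take an ordinary dense causal-attention path, while longer sequences take a block-sparse path that scores key/value blocks per query and keeps only the top-k, with both paths sharing the same QKV and output projections (the parameter-free reuse of dense attention parameters). The class name `SwitchableAttention`, the hyperparameters `block_size`, `top_k_blocks`, and `switch_len`, and the mean-pooled block scoring are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch only -- not the authors' InfLLM-V2 implementation.
import torch
import torch.nn.functional as F
from torch import nn


class SwitchableAttention(nn.Module):
    """Toy dense-sparse switchable attention (hypothetical names/defaults)."""

    def __init__(self, d_model=256, n_heads=4, block_size=64,
                 top_k_blocks=4, switch_len=1024):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Both paths share these projections: the "parameter-free" reuse
        # of the dense attention parameters.
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.block_size = block_size
        self.top_k_blocks = top_k_blocks
        self.switch_len = switch_len

    def forward(self, x):
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, heads, T, d_head).
        q, k, v = (t.reshape(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        if T <= self.switch_len:
            # Short input: plain dense causal attention.
            o = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        else:
            # Long input: block-sparse attention over selected KV blocks.
            o = self._block_sparse(q, k, v)
        return self.out(o.transpose(1, 2).reshape(B, T, -1))

    def _block_sparse(self, q, k, v):
        B, H, T, Dh = q.shape
        bs = self.block_size
        n_blocks = (T + bs - 1) // bs
        pad = n_blocks * bs - T
        # Summarize each key block by its mean vector, then score blocks
        # against every query position.
        k_blocks = F.pad(k, (0, 0, 0, pad)).reshape(B, H, n_blocks, bs, Dh).mean(3)
        scores = q @ k_blocks.transpose(-1, -2)            # (B, H, T, n_blocks)
        topk = scores.topk(min(self.top_k_blocks, n_blocks), dim=-1).indices
        keep = torch.zeros_like(scores, dtype=torch.bool).scatter_(-1, topk, True)
        # Always keep the query's own block so no row is fully masked.
        own = (torch.arange(T, device=q.device) // bs).view(1, 1, T, 1)
        keep.scatter_(-1, own.expand(B, H, T, 1), True)
        # Expand the block mask to token resolution and apply causality.
        mask = keep.repeat_interleave(bs, dim=-1)[..., :T]
        mask = mask & torch.ones(T, T, dtype=torch.bool, device=q.device).tril()
        return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```

Note that the paper reports an efficient implementation for the sparse path; the boolean-mask emulation above only reproduces the block-selection semantics and would not, by itself, deliver the reported 4× speedup.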