Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection
February 3, 2026
Authors: Dongwon Jo, Beomseok Kang, Jiwon Song, Jae-Joon Kim
cs.AI
Abstract
The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers; the former can retain irrelevant tokens, while the latter relies on irreversible early decisions despite the layer- and head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses each head's Q, K, and V to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Token Sparse Attention thereby exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to 3.23× attention speedup at 128K context length with less than 1% accuracy degradation. These results demonstrate that dynamic, interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
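To make the compress-attend-decompress flow concrete, the sketch below shows one way the mechanism could look in PyTorch. The key-norm importance score, the keep_ratio parameter, and the zero-fill decompression are illustrative assumptions; the abstract does not specify the paper's actual per-head selection criterion or how unselected tokens' states are restored.

```python
import torch
import torch.nn.functional as F

def token_sparse_attention(q, q_idx=None, *, keep_ratio=0.25, k=None, v=None):
    """Minimal sketch of interleaved token-level sparsification.

    q, k, v: [batch, heads, seq_len, head_dim]
    Compresses each head to a reduced token set, attends over it,
    then decompresses the output back to the full sequence so that
    later layers can reconsider every token.
    """
    B, H, S, D = q.shape
    n_keep = max(1, int(S * keep_ratio))

    # Hypothetical importance score: per-head key L2 norm. The paper's
    # actual selection criterion is not given in the abstract.
    scores = k.norm(dim=-1)                            # [B, H, S]
    idx = scores.topk(n_keep, dim=-1).indices          # [B, H, n_keep]
    gather_idx = idx.unsqueeze(-1).expand(B, H, n_keep, D)

    # Compress: gather the selected tokens' Q, K, V for each head.
    q_s = q.gather(2, gather_idx)
    k_s = k.gather(2, gather_idx)
    v_s = v.gather(2, gather_idx)

    # Dense attention over the reduced token set; any dense kernel
    # (e.g. Flash Attention) could be dropped in here.
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s)

    # Decompress: scatter outputs back to their original positions.
    # Unselected tokens get zeros here; a residual connection outside
    # this function would carry their states forward unchanged.
    out = torch.zeros_like(q)
    out.scatter_(2, gather_idx, out_s)
    return out
```

Because the inner call is ordinary dense attention over a shorter sequence, a dense kernel such as Flash Attention (or an existing sparse attention kernel) can be substituted directly, which is what makes this token-level scheme composable with attention-map sparsification rather than a replacement for it.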