
Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

February 3, 2026
Authors: Dongwon Jo, Beomseok Kang, Jiwon Song, Jae-Joon Kim
cs.AI

Abstract

The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers; as a result, they can retain irrelevant tokens or rely on irreversible early decisions, despite the fact that token importance varies across layers and attention heads. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head Q, K, and V to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Token Sparse Attention thereby exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to 3.23x attention speedup at 128K context length with less than 1% accuracy degradation. These results demonstrate that dynamic, interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
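
The abstract describes a compress-attend-decompress pattern: each head's Q, K, and V are gathered down to a selected token subset, dense attention runs on that subset, and the output is scattered back to the full sequence so later layers can re-score every token. Below is a minimal PyTorch sketch of that pattern. The scoring rule (per-head key norms), the zero-fill decompression for unselected positions, and the name `token_sparse_attention` are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep_ratio=0.25):
    """Sketch of per-head token compression around dense attention.

    q, k, v: [batch, heads, seq, dim]. Returns [batch, heads, seq, dim].
    """
    B, H, S, D = q.shape
    k_keep = max(1, int(S * keep_ratio))

    # Assumed importance score: L2 norm of each key vector, computed per head.
    scores = k.norm(dim=-1)                              # [B, H, S]
    idx = scores.topk(k_keep, dim=-1).indices
    idx = idx.sort(dim=-1).values                        # keep positional order
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, -1, D)

    # Compress: gather the selected tokens' Q, K, V for each head.
    q_s = q.gather(2, gather_idx)
    k_s = k.gather(2, gather_idx)
    v_s = v.gather(2, gather_idx)

    # Dense attention on the reduced token set; any dense kernel works here
    # (e.g. scaled_dot_product_attention, which can dispatch to Flash Attention).
    out_s = F.scaled_dot_product_attention(q_s, k_s, v_s, is_causal=True)

    # Decompress: scatter outputs back to their original sequence positions.
    # Unselected positions are zero-filled here (an assumption); because every
    # layer re-selects tokens, no token is permanently evicted.
    out = torch.zeros_like(q)
    out.scatter_(2, gather_idx, out_s)
    return out
```

Since selection happens independently inside each attention call, this composes with existing sparse attention kernels by simply swapping the dense call on the reduced set.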