

Prism: Spectral-Aware Block-Sparse Attention

February 9, 2026
Authors: Xinghao Wang, Pengyu Wang, Xiaoran Liu, Fangxu Liu, Jason Chu, Kai Song, Xipeng Qiu
cs.AI

Abstract

Block-sparse attention is promising for accelerating long-context LLM pre-filling, yet identifying relevant blocks efficiently remains a bottleneck. Existing methods typically employ coarse-grained attention as a proxy for block importance estimation, but often resort to expensive token-level searching or scoring, resulting in significant selection overhead. In this work, we trace the inaccuracy of standard coarse-grained attention via mean pooling to a theoretical root cause: the interaction between mean pooling and Rotary Positional Embeddings (RoPE). We prove that mean pooling acts as a low-pass filter that induces destructive interference in high-frequency dimensions, effectively creating a "blind spot" for local positional information (e.g., slash patterns). To address this, we introduce Prism, a training-free spectral-aware approach that decomposes block selection into high-frequency and low-frequency branches. By applying energy-based temperature calibration, Prism restores the attenuated positional signals directly from pooled representations, enabling block importance estimation using purely block-level operations, thereby improving efficiency. Extensive evaluations confirm that Prism maintains accuracy parity with full attention while delivering up to 5.1× speedup.
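The low-pass-filter claim can be checked numerically. The sketch below (not the paper's code; it assumes the standard RoPE frequency schedule θ_i = base^(−2i/d) with base 10000, a head dimension d of 64, and a block size B of 64) measures how much each rotary frequency survives mean pooling: averaging the unit phasors e^{imθ} over B consecutive positions has magnitude |sin(Bθ/2) / (B·sin(θ/2))|, which stays near 1 for low frequencies but collapses toward 0 for high ones — the destructive interference described above.

```python
# Toy illustration (not the paper's implementation): mean pooling of
# RoPE phasors acts as a low-pass filter over rotary frequencies.
import numpy as np

def mean_pool_gain(theta: float, block_size: int) -> float:
    """Magnitude of the mean of e^{i*m*theta} for m = 0..block_size-1."""
    phasors = np.exp(1j * theta * np.arange(block_size))
    return float(np.abs(phasors.mean()))

d, base, B = 64, 10000.0, 64                     # head dim, RoPE base, block size (assumed)
thetas = base ** (-2.0 * np.arange(d // 2) / d)  # standard RoPE frequencies, high to low
gains = [mean_pool_gain(t, B) for t in thetas]

print(f"high-frequency gain (dim pair 0):   {gains[0]:.3f}")   # heavily attenuated
print(f"low-frequency gain (last dim pair): {gains[-1]:.5f}")  # essentially preserved
```

Under these assumptions the highest-frequency dimension pair retains only a few percent of its magnitude after pooling, while the lowest-frequency pair is nearly untouched — which is why a pooled block representation is blind to local, high-frequency positional structure such as slash patterns.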