
MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head

January 12, 2026
Authors: Kewei Zhang, Ye Huang, Yufan Deng, Jincheng Yu, Junsong Chen, Huan Ling, Enze Xie, Daquan Zhou
cs.AI

Abstract

While the Transformer architecture dominates many fields, its quadratic self-attention complexity hinders its use in large-scale applications. Linear attention offers an efficient alternative, but its direct application often degrades performance, and existing fixes typically re-introduce computational overhead through extra modules (e.g., depthwise separable convolution), defeating the original purpose. In this work, we identify a key failure mode in these methods: global context collapse, where the model loses representational diversity. To address this, we propose Multi-Head Linear Attention (MHLA), which preserves this diversity by computing attention within heads divided along the token dimension. We prove that MHLA maintains linear complexity while recovering much of the expressive power of softmax attention, and verify its effectiveness across multiple domains, achieving a 3.6% improvement on ImageNet classification, a 6.3% gain on NLP, a 12.6% improvement on image generation, and a 41% improvement on video generation under the same time complexity.
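The abstract only sketches the mechanism: rather than splitting channels into heads as in standard multi-head attention, MHLA splits the token sequence into groups and computes linear attention within each group. Below is a minimal, hypothetical PyTorch sketch of that idea; the function name, the ELU+1 feature map, and the contiguous grouping scheme are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def token_level_multi_head_linear_attention(q, k, v, num_heads=4, eps=1e-6):
    """q, k, v: (batch, seq_len, dim); seq_len is assumed divisible by num_heads.
    Hypothetical sketch of token-level multi-head linear attention."""
    B, N, D = q.shape
    assert N % num_heads == 0, "sequence length must be divisible by num_heads"
    # Split the *token* dimension into heads: (B, H, N/H, D).
    q = q.view(B, num_heads, N // num_heads, D)
    k = k.view(B, num_heads, N // num_heads, D)
    v = v.view(B, num_heads, N // num_heads, D)
    # Non-negative feature map, a common choice in linear attention (assumption).
    q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
    # Linear attention computed within each token group; cost stays linear in N.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)                 # (B, H, D, D)
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)        # (B, H, N/H, D)
    return out.reshape(B, N, D)

# Example: 2 samples, 196 tokens, 64 channels.
x = torch.randn(2, 196, 64)
y = token_level_multi_head_linear_attention(x, x, x, num_heads=4)
print(y.shape)  # torch.Size([2, 196, 64])
```

Because each of the H token groups holds N/H tokens, the per-group cost is O((N/H)·D²) and the total cost remains linear in sequence length, while the per-group statistics keep the representations from collapsing into a single shared global context.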