

Native Hybrid Attention for Efficient Sequence Modeling

October 8, 2025
Authors: Jusen Du, Jiaxi Hu, Tao Zhang, Weigao Sun, Yu Cheng
cs.AI

Abstract

Transformers excel at sequence modeling but face quadratic complexity, while linear attention offers improved efficiency but often compromises recall accuracy over long contexts. In this work, we introduce Native Hybrid Attention (NHA), a novel hybrid architecture of linear and full attention that integrates both intra- and inter-layer hybridization into a unified layer design. NHA maintains long-term context in key-value slots updated by a linear RNN, and augments them with short-term tokens from a sliding window. A single softmax attention operation is then applied over all keys and values, enabling per-token and per-head context-dependent weighting without requiring additional fusion parameters. The inter-layer behavior is controlled through a single hyperparameter, the sliding window size, which allows smooth adjustment between purely linear and full attention while keeping all layers structurally uniform. Experimental results show that NHA surpasses Transformers and other hybrid baselines on recall-intensive and commonsense reasoning tasks. Furthermore, pretrained LLMs can be structurally hybridized with NHA, achieving competitive accuracy while delivering significant efficiency gains. Code is available at https://github.com/JusenD/NHA.
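
As a rough illustration of the layer design described in the abstract, the minimal PyTorch sketch below concatenates long-term key-value slots with short-term keys and values from a sliding window and applies one softmax attention over both. The per-slot exponential-decay update here is only a toy stand-in for the paper's learned linear RNN; all function names, tensor shapes, and decay values are illustrative assumptions rather than the authors' implementation (the released code at https://github.com/JusenD/NHA is the reference).

```python
# Minimal sketch of hybrid attention over long-term KV slots + a sliding window.
# The slot recurrence below is a toy EMA, not the paper's learned linear RNN.
import torch
import torch.nn.functional as F


def update_slots(slot_k, slot_v, k_t, v_t, decays):
    """Toy linear-recurrence update: each slot is an exponential moving average
    of past keys/values with its own decay rate (illustrative assumption).

    slot_k, slot_v: (heads, num_slots, d)
    k_t, v_t:       (heads, d)          current token's key and value
    decays:         (num_slots,)        per-slot decay in (0, 1)
    """
    g = decays.view(1, -1, 1)                        # broadcast over heads and d
    slot_k = g * slot_k + (1 - g) * k_t.unsqueeze(1)
    slot_v = g * slot_v + (1 - g) * v_t.unsqueeze(1)
    return slot_k, slot_v


def hybrid_attention(q, win_k, win_v, slot_k, slot_v):
    """One softmax attention over long-term slots plus sliding-window tokens,
    giving per-token, per-head weighting with no extra fusion parameters.

    q:              (heads, d)
    win_k, win_v:   (heads, window, d)
    slot_k, slot_v: (heads, num_slots, d)
    """
    d = q.shape[-1]
    k = torch.cat([slot_k, win_k], dim=1)            # (heads, num_slots + window, d)
    v = torch.cat([slot_v, win_v], dim=1)
    scores = torch.einsum("hd,hnd->hn", q, k) / d ** 0.5
    attn = F.softmax(scores, dim=-1)                 # single softmax over both sources
    return torch.einsum("hn,hnd->hd", attn, v)       # (heads, d)


if __name__ == "__main__":
    heads, d, window, num_slots = 4, 64, 8, 16
    decays = torch.linspace(0.80, 0.99, num_slots)   # multiple memory timescales
    slot_k = torch.zeros(heads, num_slots, d)
    slot_v = torch.zeros(heads, num_slots, d)

    # Feed a short random sequence token by token to fill the long-term slots.
    for _ in range(32):
        k_t, v_t = torch.randn(heads, d), torch.randn(heads, d)
        slot_k, slot_v = update_slots(slot_k, slot_v, k_t, v_t, decays)

    q = torch.randn(heads, d)
    win_k, win_v = torch.randn(heads, window, d), torch.randn(heads, window, d)
    out = hybrid_attention(q, win_k, win_v, slot_k, slot_v)
    print(out.shape)  # torch.Size([4, 64])
```

In this sketch, shrinking the window toward zero leaves only slot-based (linear-style) attention, while letting the window cover the whole sequence recovers ordinary full attention, mirroring how the abstract's single sliding-window hyperparameter interpolates between the two regimes.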