SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning
February 13, 2026
Authors: Jintao Zhang, Kai Jiang, Chendong Xiang, Weiqi Feng, Yuezhou Hu, Haocheng Xi, Jianfei Chen, Jun Zhu
cs.AI
Abstract
Many training-free sparse attention methods are effective for accelerating diffusion models. Several recent works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can these failures be avoided? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention with the diffusion loss, and how can they be addressed? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective that better preserves generation quality when fine-tuning with sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
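The abstract does not spell out the exact hybrid rule, so the following is a minimal sketch of how a combined Top-k + Top-p block mask could look, assuming block-level scores have already been pooled from Q and K and that the hybrid rule keeps the union of the two selections. The function name, the pooling assumption, and the union rule are illustrative, not the paper's definitive algorithm.

```python
import torch

def hybrid_block_mask(block_scores: torch.Tensor, k: int, p: float) -> torch.Tensor:
    """block_scores: (num_q_blocks, num_k_blocks) pooled similarity scores.
    Returns a boolean mask of the same shape; True means the K/V block is kept."""
    # Normalize per query block so Top-p can be read as cumulative probability mass.
    probs = torch.softmax(block_scores, dim=-1)
    sorted_probs, order = torch.sort(probs, dim=-1, descending=True)

    # Top-p: keep the shortest prefix of blocks whose cumulative mass reaches p.
    cum = torch.cumsum(sorted_probs, dim=-1)
    keep_p = (cum - sorted_probs) < p          # include the block that crosses p

    # Top-k: always keep the k highest-scoring blocks per query block.
    keep_k = torch.arange(probs.size(-1), device=probs.device) < k

    # Assumed hybrid rule: union of the two criteria (robust when scores are
    # either very flat, where Top-k alone keeps too little mass, or very peaked,
    # where Top-p alone keeps too few blocks).
    keep_sorted = keep_p | keep_k

    # Scatter the decisions back to the original block order.
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask.scatter_(-1, order, keep_sorted)
    return mask
```

Likewise, the distillation-inspired objective is only named in the abstract; one plausible form, assumed here purely for illustration, mixes the standard denoising loss with a term that matches the sparse-attention model's prediction to a frozen dense-attention teacher.

```python
import torch.nn.functional as F

def finetune_loss(student_pred, teacher_pred, target, alpha: float = 0.5):
    # Distillation term: follow the frozen dense-attention teacher's prediction.
    distill = F.mse_loss(student_pred, teacher_pred.detach())
    # Standard diffusion (denoising) loss against the training target.
    diffusion = F.mse_loss(student_pred, target)
    return alpha * distill + (1.0 - alpha) * diffusion
```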