

SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

February 13, 2026
Authors: Jintao Zhang, Kai Jiang, Chendong Xiang, Weiqi Feng, Yuezhou Hu, Haocheng Xi, Jianfei Chen, Jun Zhu
cs.AI

Abstract

Many training-free sparse attention methods are effective for accelerating diffusion models. Recent work suggests that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, Top-k and Top-p, fail, and how can these failures be avoided? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention with the diffusion loss, and how can they be addressed? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective that better preserves generation quality when fine-tuning with sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
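
The hybrid masking rule combines Top-k (keep a fixed number of highest-scoring key blocks per query block) with Top-p (keep the smallest set of blocks whose attention mass reaches a threshold). Below is a minimal PyTorch sketch of one way such a combination could look at the block level; the pooled block scores, the union-style combination, and the names (`hybrid_topk_topp_mask`, `block_scores`, `k`, `p`) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: block-level hybrid Top-k + Top-p mask selection.
import torch

def hybrid_topk_topp_mask(block_scores: torch.Tensor, k: int, p: float) -> torch.Tensor:
    """block_scores: [num_q_blocks, num_k_blocks] pooled attention logits.
    Returns a boolean mask of the same shape; True means keep the block."""
    k = min(k, block_scores.size(-1))

    # Top-k rule: always keep the k highest-scoring key blocks per query block.
    topk_idx = block_scores.topk(k, dim=-1).indices
    topk_mask = torch.zeros_like(block_scores, dtype=torch.bool)
    topk_mask.scatter_(-1, topk_idx, True)

    # Top-p rule: keep the smallest set of blocks whose softmax mass reaches p.
    probs = block_scores.softmax(dim=-1)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cum = sorted_probs.cumsum(dim=-1)
    # Keep a block if the cumulative mass *before* it is still below p.
    keep_sorted = (cum - sorted_probs) < p
    topp_mask = torch.zeros_like(topk_mask)
    topp_mask.scatter_(-1, sorted_idx, keep_sorted)

    # Assumed hybrid rule (union): Top-k guards near-uniform rows where Top-p
    # keeps too few blocks; Top-p adapts to peaked rows where a fixed k keeps
    # too many.
    return topk_mask | topp_mask
```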
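The distillation-inspired fine-tuning objective, as described in the abstract, supplements the plain diffusion loss so that the sparse-attention model stays close to the original full-attention model during fine-tuning. The sketch below is one hedged interpretation of such an objective; the per-step MSE matching to a frozen full-attention teacher and the weighting term `alpha` are assumptions for illustration, not the paper's exact loss.

```python
# Illustrative sketch only: one fine-tuning step that combines a diffusion
# loss with a distillation term against a frozen full-attention teacher.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x_t, t, cond, diffusion_target, alpha=1.0):
    with torch.no_grad():
        teacher_out = teacher(x_t, t, cond)   # full attention, frozen
    student_out = student(x_t, t, cond)       # sparse attention, trainable
    distill_loss = F.mse_loss(student_out, teacher_out)
    diffusion_loss = F.mse_loss(student_out, diffusion_target)
    return diffusion_loss + alpha * distill_loss
```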