SageAttention2++: A More Efficient Implementation of SageAttention2
May 27, 2025
Authors: Jintao Zhang, Xiaoming Xu, Jia Wei, Haofeng Huang, Pengle Zhang, Chendong Xiang, Jun Zhu, Jianfei Chen
cs.AI
Abstract
The efficiency of attention is critical because its time complexity grows
quadratically with sequence length. SageAttention2 addresses this by utilizing
quantization to accelerate matrix multiplications (Matmul) in attention. To
further accelerate SageAttention2, we propose using a faster instruction: FP8
Matmul with accumulation in FP16. This instruction is 2x faster than the FP8
Matmul used in SageAttention2. Our experiments show that SageAttention2++
achieves a 3.9x speedup over FlashAttention while maintaining the same
attention accuracy as SageAttention2. This means SageAttention2++ effectively
accelerates various models, including those for language, image, and video
generation, with negligible end-to-end metrics loss. The code will be available
at https://github.com/thu-ml/SageAttention.
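The reported 2x instruction-level gain comes from replacing the FP8 Matmul with FP32 accumulation used in SageAttention2 with an FP8 Matmul that accumulates in FP16. The following is a minimal CUDA sketch of how such an instruction can be issued through inline PTX. It is an illustration under assumptions, not the authors' kernel: it assumes an sm_89-or-newer GPU and a CUDA toolchain whose PTX exposes the f16-accumulator e4m3 mma variant, and the kernel name, buffer layout, and zero-filled demo data are all hypothetical.

```cuda
// Build (illustrative): nvcc -arch=sm_89 fp8_mma_f16acc_demo.cu -o demo
#include <cuda_runtime.h>
#include <cstdint>
#include <cstdio>

// One warp-level m16n8k32 MMA: A (16x32, e4m3) x B (32x8, e4m3) + C (16x8, f16)
// -> D (16x8, f16). Arguments are the per-thread packed 32-bit registers
// described in the PTX ISA; a real attention kernel would fill them from
// quantized Q/K tiles (e.g. via ldmatrix), not from the zeroed demo buffers.
__device__ __forceinline__ void mma_f8f8f16(uint32_t d[2], const uint32_t a[4],
                                            const uint32_t b[2],
                                            const uint32_t c[2]) {
  asm volatile(
      "mma.sync.aligned.m16n8k32.row.col.f16.e4m3.e4m3.f16 "
      "{%0, %1}, {%2, %3, %4, %5}, {%6, %7}, {%8, %9};\n"
      : "=r"(d[0]), "=r"(d[1])
      : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]), "r"(b[0]), "r"(b[1]),
        "r"(c[0]), "r"(c[1]));
}

// A single warp loads its packed fragments from global memory, issues the MMA,
// and stores the two result registers (four f16 values) per thread.
__global__ void mma_demo(const uint32_t* A, const uint32_t* B,
                         const uint32_t* C, uint32_t* D) {
  int lane = threadIdx.x & 31;
  uint32_t a[4], b[2], c[2], d[2];
  for (int i = 0; i < 4; ++i) a[i] = A[lane * 4 + i];
  for (int i = 0; i < 2; ++i) b[i] = B[lane * 2 + i];
  for (int i = 0; i < 2; ++i) c[i] = C[lane * 2 + i];
  mma_f8f8f16(d, a, b, c);
  D[lane * 2 + 0] = d[0];
  D[lane * 2 + 1] = d[1];
}

int main() {
  uint32_t *A, *B, *C, *D;
  cudaMalloc(&A, 32 * 4 * sizeof(uint32_t));  // 16x32 e4m3 tile, packed
  cudaMalloc(&B, 32 * 2 * sizeof(uint32_t));  // 32x8  e4m3 tile, packed
  cudaMalloc(&C, 32 * 2 * sizeof(uint32_t));  // 16x8  f16 accumulator in
  cudaMalloc(&D, 32 * 2 * sizeof(uint32_t));  // 16x8  f16 result out
  cudaMemset(A, 0, 32 * 4 * sizeof(uint32_t));
  cudaMemset(B, 0, 32 * 2 * sizeof(uint32_t));
  cudaMemset(C, 0, 32 * 2 * sizeof(uint32_t));
  mma_demo<<<1, 32>>>(A, B, C, D);
  cudaError_t err = cudaDeviceSynchronize();
  printf("mma_demo finished: %s\n", cudaGetErrorString(err));
  cudaFree(A); cudaFree(B); cudaFree(C); cudaFree(D);
  return 0;
}
```

The f16 accumulator halves the accumulator register traffic relative to the f32 variant, which is where the abstract's 2x instruction speedup comes from; in the full method this is paired with the quantization scales from SageAttention2 so that attention accuracy is preserved.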