

SageAttention2++: A More Efficient Implementation of SageAttention2

May 27, 2025
作者: Jintao Zhang, Xiaoming Xu, Jia Wei, Haofeng Huang, Pengle Zhang, Chendong Xiang, Jun Zhu, Jianfei Chen
cs.AI

Abstract

The efficiency of attention is critical because its time complexity grows quadratically with sequence length. SageAttention2 addresses this by using quantization to accelerate the matrix multiplications (Matmul) in attention. To accelerate SageAttention2 further, we propose using the faster FP8 Matmul instruction that accumulates in FP16; this instruction is 2x faster than the FP8 Matmul used in SageAttention2. Our experiments show that SageAttention2++ achieves a 3.9x speedup over FlashAttention while maintaining the same attention accuracy as SageAttention2. This means SageAttention2++ effectively accelerates various models, including language, image, and video generation models, with negligible end-to-end metric loss. The code will be available at https://github.com/thu-ml/SageAttention.
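As a rough illustration of the quantized QK^T computation the abstract describes, the following PyTorch sketch emulates quantizing Q and K to FP8 (e4m3) and accumulating their product in FP16, then compares the result against an FP32 reference. This is a numerical emulation only, not the authors' CUDA kernel or the hardware MMA instruction; the function name fp8_qk_scores, the per-tensor scaling scheme, and the reliance on torch.float8_e4m3fn (available in recent PyTorch releases) are illustrative assumptions.

# Minimal numerical sketch (not the official kernel): emulates the effect of
# quantizing Q and K to FP8 (e4m3) before the QK^T matmul, with the product
# accumulated in FP16, and compares against an FP32 reference.
import torch

def fp8_qk_scores(q, k):
    # Per-tensor symmetric scaling into the e4m3 range (max normal value ~448).
    q_scale = q.abs().amax() / 448.0
    k_scale = k.abs().amax() / 448.0
    q8 = (q / q_scale).to(torch.float8_e4m3fn)
    k8 = (k / k_scale).to(torch.float8_e4m3fn)
    # Emulate an FP8 Matmul accumulated in FP16 by casting the FP8 operands to
    # FP16 before the matmul; real hardware uses a dedicated MMA instruction.
    s = q8.to(torch.float16) @ k8.to(torch.float16).T
    # Dequantize the scores back to FP32 using the two scales.
    return s.to(torch.float32) * (q_scale * k_scale)

q = torch.randn(128, 64)
k = torch.randn(128, 64)
ref = q @ k.T
approx = fp8_qk_scores(q, k)
print("max abs error:", (ref - approx).abs().max().item())

Note that FP16 matmul on CPU requires a fairly recent PyTorch build; on GPU, this still only emulates the FP16-accumulating FP8 MMA instruction rather than invoking it, since that path is exercised through the SageAttention kernels rather than plain torch.matmul.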
