DiTFastAttn: Attention Compression for Diffusion Transformer Models
June 12, 2024
Authors: Zhihang Yuan, Pu Lu, Hanling Zhang, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, Yu Wang
cs.AI
Abstract
Diffusion Transformers (DiT) excel at image and video generation but face computational challenges due to self-attention's quadratic complexity. We propose DiTFastAttn, a novel post-training compression method to alleviate DiT's computational bottleneck. We identify three key redundancies in the attention computation during DiT inference: 1. spatial redundancy, where many attention heads focus on local information; 2. temporal redundancy, with high similarity between neighboring steps' attention outputs; 3. conditional redundancy, where conditional and unconditional inferences exhibit significant similarity. To tackle these redundancies, we propose three techniques: 1. Window Attention with Residual Caching to reduce spatial redundancy; 2. Temporal Similarity Reduction to exploit the similarity between steps; 3. Conditional Redundancy Elimination to skip redundant computations during conditional generation. To demonstrate the effectiveness of DiTFastAttn, we apply it to DiT and PixArt-Sigma for image generation tasks, and to OpenSora for video generation tasks. Evaluation results show that for image generation, our method reduces FLOPs by up to 88% and achieves up to a 1.6x speedup for high-resolution generation.
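As a rough illustration of the first technique, the sketch below shows one way a windowed (band) attention output might be combined with a residual cached from an occasional full-attention step, so that cheap steps pay only the window-attention cost. The class name `WindowAttnWithResidualCache`, the `window` size, and the `use_full_attention` schedule are illustrative assumptions, not the paper's implementation; for clarity the window branch masks the full score matrix, whereas a real kernel would compute only the in-window scores to realize the FLOP savings.

```python
import torch


def window_mask(seq_len, window, device):
    """Boolean band mask keeping keys within `window` positions of each query."""
    idx = torch.arange(seq_len, device=device)
    return (idx[None, :] - idx[:, None]).abs() <= window  # (seq_len, seq_len)


def attention(q, k, v, mask=None):
    """Plain scaled dot-product attention; q, k, v: (batch, heads, seq, dim)."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v


class WindowAttnWithResidualCache:
    """Hypothetical layer: on designated steps compute full attention and cache the
    gap between full and window attention; on other steps reuse that cached residual."""

    def __init__(self, window=64):
        self.window = window
        self.residual = None  # cached (full - window) attention output

    def __call__(self, q, k, v, use_full_attention):
        mask = window_mask(q.shape[-2], self.window, q.device)
        local = attention(q, k, v, mask)       # short-range (windowed) part
        if use_full_attention or self.residual is None:
            full = attention(q, k, v)          # global attention on this step
            self.residual = full - local       # cache the long-range contribution
            return full
        return local + self.residual           # cheap step: window attn + cached residual


# Illustrative usage across two denoising steps.
attn = WindowAttnWithResidualCache(window=8)
q = k = v = torch.randn(1, 2, 64, 16)
out_full = attn(q, k, v, use_full_attention=True)   # e.g. an early step: refresh the cache
out_fast = attn(q, k, v, use_full_attention=False)  # later step reuses the cached residual
```

The sketch leans on the spatial-redundancy observation from the abstract: if most heads attend locally, the long-range contribution changes slowly and can be reused between refreshes rather than recomputed every step.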