DiTFastAttn: Attention Compression for Diffusion Transformer Models
June 12, 2024
Authors: Zhihang Yuan, Pu Lu, Hanling Zhang, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, Yu Wang
cs.AI
Abstract
Diffusion Transformers (DiT) excel at image and video generation but face computational challenges due to self-attention's quadratic complexity. We propose DiTFastAttn, a novel post-training compression method to alleviate DiT's computational bottleneck. We identify three key redundancies in the attention computation during DiT inference: 1. spatial redundancy, where many attention heads focus on local information; 2. temporal redundancy, with high similarity between neighboring steps' attention outputs; 3. conditional redundancy, where conditional and unconditional inferences exhibit significant similarity. To tackle these redundancies, we propose three techniques: 1. Window Attention with Residual Caching to reduce spatial redundancy; 2. Temporal Similarity Reduction to exploit the similarity between steps; 3. Conditional Redundancy Elimination to skip redundant computations during conditional generation. To demonstrate the effectiveness of DiTFastAttn, we apply it to DiT and PixArt-Sigma for image generation tasks, and to OpenSora for video generation tasks. Evaluation results show that for image generation, our method reduces up to 88% of the FLOPs and achieves up to 1.6x speedup at high-resolution generation.
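
The abstract only names the three techniques; the block below is a minimal, hypothetical PyTorch sketch (not the authors' released implementation) of how such reuse could look in an attention layer. The function names (`fast_attention`, `window_mask`, `cfg_step`), the `full_every` refresh schedule, and the cache layout are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def window_mask(seq_len, window, device=None):
    # True where key j lies within the local window of query i.
    idx = torch.arange(seq_len, device=device)
    return (idx[None, :] - idx[:, None]).abs() <= window


def fast_attention(q, k, v, cache, step, window=128, full_every=4, reuse_temporal=False):
    # Temporal Similarity Reduction (sketch): reuse the previous step's output
    # when the caller decides neighboring steps are similar enough.
    if reuse_temporal and "prev_out" in cache:
        return cache["prev_out"]

    mask = window_mask(q.shape[-2], window, device=q.device)
    local = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

    # Window Attention with Residual Caching (sketch): run full attention on
    # occasional steps, cache the (full - windowed) residual, and add it back
    # on the cheaper window-only steps. The refresh period is an assumption.
    if step % full_every == 0 or "residual" not in cache:
        full = F.scaled_dot_product_attention(q, k, v)
        cache["residual"] = full - local
        out = full
    else:
        out = local + cache["residual"]

    cache["prev_out"] = out
    return out


def cfg_step(attn_fn, cond_inputs, uncond_inputs, share=False):
    # Conditional Redundancy Elimination (sketch): on steps where the two
    # classifier-free-guidance branches are nearly identical, compute the
    # conditional branch once and reuse its output for the unconditional one.
    cond = attn_fn(*cond_inputs)
    uncond = cond if share else attn_fn(*uncond_inputs)
    return cond, uncond


if __name__ == "__main__":
    B, H, L, D = 1, 8, 256, 64
    cache = {}
    q, k, v = (torch.randn(B, H, L, D) for _ in range(3))
    for t in range(8):
        out = fast_attention(q, k, v, cache, step=t, window=32, reuse_temporal=(t % 2 == 1))
    print(out.shape)  # torch.Size([1, 8, 256, 64])
```

In this sketch the decisions of when to reuse a step or share the conditional branch are left to the caller; the paper determines where each reduction can be applied without degrading generation quality.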