

Clockwork Diffusion: Efficient Generation With Model-Step Distillation

December 13, 2023
作者: Amirhossein Habibian, Amir Ghodrati, Noor Fathima, Guillaume Sautiere, Risheek Garrepalli, Fatih Porikli, Jens Petersen
cs.AI

Abstract

This work aims to improve the efficiency of text-to-image diffusion models. While diffusion models use computationally expensive UNet-based denoising operations in every generation step, we identify that not all operations are equally relevant for the final output quality. In particular, we observe that UNet layers operating on high-res feature maps are relatively sensitive to small perturbations. In contrast, low-res feature maps influence the semantic layout of the final image and can often be perturbed with no noticeable change in the output. Based on this observation, we propose Clockwork Diffusion, a method that periodically reuses computation from preceding denoising steps to approximate low-res feature maps at one or more subsequent steps. For multiple baselines, and for both text-to-image generation and image editing, we demonstrate that Clockwork leads to comparable or improved perceptual scores with drastically reduced computational complexity. As an example, for Stable Diffusion v1.5 with 8 DPM++ steps we save 32% of FLOPs with negligible FID and CLIP change.
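The core scheduling idea described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: the functions `low_res_path`, `high_res_path`, and `clockwork_denoise` are hypothetical placeholders with dummy arithmetic, standing in for the expensive low-resolution UNet stages, the cheap high-resolution stages, and the sampling loop. The sketch only shows the clock schedule: recompute the low-res features every `clock`-th step and reuse the cached result in between.

```python
# Sketch of the Clockwork scheduling idea (placeholder math, not the paper's code).
# Assumption: the UNet splits into an expensive low-res path and a cheap
# high-res path; the low-res output changes slowly across denoising steps,
# so it can be cached and reused between periodic full evaluations.

def low_res_path(latent, step):
    """Expensive low-res UNet stages (placeholder arithmetic)."""
    return [x * 0.5 + step for x in latent]

def high_res_path(latent, low_res_features):
    """Cheap high-res UNet stages, consuming low-res features (placeholder)."""
    return [x - 0.1 * f for x, f in zip(latent, low_res_features)]

def clockwork_denoise(latent, num_steps=8, clock=2):
    """Run num_steps denoising steps, evaluating the low-res path fully
    only on every clock-th step and reusing the cached features otherwise."""
    cached = None
    full_evals = 0
    for step in range(num_steps):
        if step % clock == 0 or cached is None:
            cached = low_res_path(latent, step)  # full (expensive) evaluation
            full_evals += 1
        # On the remaining steps, `cached` approximates the low-res features.
        latent = high_res_path(latent, cached)
    return latent, full_evals

latent, full_evals = clockwork_denoise([1.0, 2.0, 3.0], num_steps=8, clock=2)
# With clock=2, only 4 of the 8 steps run the expensive low-res path.
```

With `clock=2` the expensive path runs on half the steps, which is the source of the FLOP savings the abstract reports; the real method approximates the skipped features with a lightweight adaptor rather than reusing them verbatim.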