

Region-Adaptive Sampling for Diffusion Transformers

February 14, 2025
作者: Ziming Liu, Yifan Yang, Chengruidong Zhang, Yiqi Zhang, Lili Qiu, Yang You, Yuqing Yang
cs.AI

Abstract

Diffusion models (DMs) have become the leading choice for generative tasks across diverse domains. However, their reliance on multiple sequential forward passes significantly limits real-time performance. Previous acceleration methods have primarily focused on reducing the number of sampling steps or reusing intermediate results, failing to leverage variations across spatial regions within the image due to the constraints of convolutional U-Net structures. By harnessing the flexibility of Diffusion Transformers (DiTs) in handling variable numbers of tokens, we introduce RAS, a novel, training-free sampling strategy that dynamically assigns different sampling ratios to regions within an image based on the focus of the DiT model. Our key observation is that during each sampling step, the model concentrates on semantically meaningful regions, and these areas of focus exhibit strong continuity across consecutive steps. Leveraging this insight, RAS updates only the regions currently in focus, while other regions are updated using cached noise from the previous step. The model's focus is determined based on the output from the preceding step, capitalizing on the temporal consistency we observed. We evaluate RAS on Stable Diffusion 3 and Lumina-Next-T2I, achieving speedups of up to 2.36x and 2.51x, respectively, with minimal degradation in generation quality. Additionally, a user study reveals that RAS delivers comparable quality under human evaluation while achieving a 1.6x speedup. Our approach takes a significant step towards more efficient diffusion transformers, enhancing their potential for real-time applications.
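The region-adaptive update described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the focus metric here (L2 norm of the previous step's cached noise per token) is an assumed proxy for the model's attention, and `ras_step`, `update_ratio`, and the stand-in model function are all hypothetical names introduced for illustration. The core idea shown is the one the abstract states: re-run the expensive model only on the top-focus tokens, and reuse cached noise everywhere else.

```python
import numpy as np

def ras_step(model, x, cached_noise, update_ratio=0.5):
    """One region-adaptive sampling step (illustrative sketch).

    Tokens whose cached-noise magnitude is largest (a proxy for the
    model's current focus) are re-evaluated by `model`; the remaining
    tokens simply reuse the noise cached from the previous step.
    """
    n_tokens = x.shape[0]
    n_update = max(1, int(n_tokens * update_ratio))
    # Rank tokens by the L2 norm of the previous step's noise estimate.
    focus = np.linalg.norm(cached_noise, axis=-1)
    update_idx = np.argsort(focus)[-n_update:]
    noise = cached_noise.copy()
    # Run the expensive model only on the in-focus tokens.
    noise[update_idx] = model(x[update_idx])
    return noise, update_idx

# Toy usage: 8 tokens with 4-dim features, and a stand-in "model"
# that predicts zero noise, so updated rows are easy to spot.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
cached = rng.standard_normal((8, 4))
noise, idx = ras_step(lambda t: t * 0.0, x, cached, update_ratio=0.25)
```

With `update_ratio=0.25`, only 2 of the 8 tokens pay for a model call per step; the real method additionally exploits the observed temporal continuity, since the focus regions chosen from step t's output tend to remain focus regions at step t+1.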

