
PLADIS: Pushing the Limits of Attention in Diffusion Models at Inference Time by Leveraging Sparsity

March 10, 2025
Authors: Kwanyoung Kim, Byeongsu Sim
cs.AI

Abstract

Diffusion models have shown impressive results in generating high-quality conditional samples using guidance techniques such as Classifier-Free Guidance (CFG). However, existing methods often require additional training or neural function evaluations (NFEs), making them incompatible with guidance-distilled models. Also, they rely on heuristic approaches that need identifying target layers. In this work, we propose a novel and efficient method, termed PLADIS, which boosts pre-trained models (U-Net/Transformer) by leveraging sparse attention. Specifically, we extrapolate query-key correlations using softmax and its sparse counterpart in the cross-attention layer during inference, without requiring extra training or NFEs. By leveraging the noise robustness of sparse attention, our PLADIS unleashes the latent potential of text-to-image diffusion models, enabling them to excel in areas where they once struggled with newfound effectiveness. It integrates seamlessly with guidance techniques, including guidance-distilled models. Extensive experiments show notable improvements in text alignment and human preference, offering a highly efficient and universally applicable solution.
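The abstract's core idea — extrapolating cross-attention maps from the dense softmax toward a sparse counterpart at inference time — can be illustrated with a small sketch. This is a hypothetical reconstruction, not the paper's verified formula: the combination rule `dense + lam * (sparse - dense)` and the scale `lam` are assumptions, and sparsemax stands in for whatever sparse transformation the method actually uses.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparsemax(x, axis=-1):
    # Sparsemax (Martins & Astudillo, 2016): Euclidean projection onto
    # the probability simplex; low-scoring entries become exactly zero.
    x = np.moveaxis(x, axis, -1)
    z = np.sort(x, axis=-1)[..., ::-1]          # sort descending
    k = np.arange(1, x.shape[-1] + 1)
    cssv = z.cumsum(axis=-1) - 1.0
    support = (z - cssv / k) > 0                # entries kept in the support
    k_z = support.sum(axis=-1, keepdims=True)
    tau = np.take_along_axis(cssv, k_z - 1, axis=-1) / k_z
    out = np.maximum(x - tau, 0.0)
    return np.moveaxis(out, -1, axis)

def pladis_like_attention(q, k, v, lam=2.0):
    # Hypothetical sketch of the abstract's idea: compute both the dense
    # and sparse attention maps over the same query-key scores, then
    # extrapolate past the dense map toward the sparse one (lam = 1
    # recovers pure sparse attention; lam > 1 extrapolates further).
    # No extra training and no extra NFEs: both maps reuse one score matrix.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    dense = softmax(scores)
    sparse = sparsemax(scores)
    attn = dense + lam * (sparse - dense)
    return attn @ v
```

Because both attention maps are row-stochastic, their affine extrapolation still sums to one per query, though individual weights can leave [0, 1] for large `lam`; this is one way the scheme can sharpen query-key correlations without re-running the network.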

