FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute
February 27, 2025
作者: Sotiris Anagnostidis, Gregor Bachmann, Yeongmin Kim, Jonas Kohler, Markos Georgopoulos, Artsiom Sanakoyeu, Yuming Du, Albert Pumarola, Ali Thabet, Edgar Schönfeld
cs.AI
Abstract
Despite their remarkable performance, modern Diffusion Transformers are hindered by substantial resource requirements during inference, stemming from the fixed and large amount of compute needed for each denoising step. In this work, we revisit the conventional static paradigm that allocates a fixed compute budget per denoising iteration and propose a dynamic strategy instead. Our simple and sample-efficient framework enables pre-trained DiT models to be converted into flexible ones -- dubbed FlexiDiT -- allowing them to process inputs at varying compute budgets. We demonstrate how a single flexible model can generate images without any drop in quality, while reducing the required FLOPs by more than 40% compared to its static counterpart, for both class-conditioned and text-conditioned image generation. Our method is general and agnostic to input and conditioning modalities. We show how our approach can be readily extended to video generation, where FlexiDiT models generate samples with up to 75% less compute without compromising performance.
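
The abstract does not spell out how compute is varied per denoising step, but the general idea of a per-step compute knob on a pre-trained DiT can be sketched in a few lines. The PyTorch sketch below is purely illustrative and rests on assumptions: `ToyFlexibleDenoiser`, its per-patch-size embeddings, and the step schedule in `sample` are hypothetical names, not the FlexiDiT implementation. It only shows how a sampler might run most denoising steps with fewer tokens (larger patches) and reserve full compute for the remaining steps.

```python
import torch
import torch.nn as nn

class ToyFlexibleDenoiser(nn.Module):
    """Toy stand-in for a flexible DiT: the patch size chosen per call controls
    the token count, and hence the compute, of that denoising step."""

    def __init__(self, in_channels=4, dim=64, patch_sizes=(2, 4)):
        super().__init__()
        # One patch embedding / un-embedding per supported patch size.
        # (A real flexible model might share weights across patch sizes.)
        self.embed = nn.ModuleDict({
            str(p): nn.Conv2d(in_channels, dim, kernel_size=p, stride=p)
            for p in patch_sizes
        })
        self.blocks = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.unembed = nn.ModuleDict({
            str(p): nn.ConvTranspose2d(dim, in_channels, kernel_size=p, stride=p)
            for p in patch_sizes
        })

    def forward(self, x, patch_size):
        h = self.embed[str(patch_size)](x)        # (B, D, H/p, W/p)
        b, d, hh, ww = h.shape
        tokens = h.flatten(2).transpose(1, 2)     # (B, N, D); N shrinks as p grows
        tokens = self.blocks(tokens)
        h = tokens.transpose(1, 2).reshape(b, d, hh, ww)
        return self.unembed[str(patch_size)](h)   # noise estimate, same shape as x

def sample(model, steps=20, shape=(1, 4, 32, 32), frac_full=0.4):
    """Dynamic compute allocation: most steps run at the cheap patch size,
    and only the final `frac_full` fraction of steps at full compute."""
    n_cheap = int(steps * (1 - frac_full))
    schedule = [4] * n_cheap + [2] * (steps - n_cheap)
    x = torch.randn(shape)
    for p in schedule:
        eps = model(x, patch_size=p)
        x = x - eps / steps  # placeholder update; a real sampler (e.g. DDIM) goes here
    return x

model = ToyFlexibleDenoiser()
print(sample(model).shape)  # torch.Size([1, 4, 32, 32])
```

Under this toy schedule, roughly 60% of the denoising steps process 4x fewer tokens than the full-compute steps, which is where per-sample FLOP savings of the kind reported in the abstract would come from.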