Bigger is not Always Better: Scaling Properties of Latent Diffusion Models
April 1, 2024
Authors: Kangfu Mei, Zhengzhong Tu, Mauricio Delbracio, Hossein Talebi, Vishal M. Patel, Peyman Milanfar
cs.AI
Abstract
We study the scaling properties of latent diffusion models (LDMs) with an
emphasis on their sampling efficiency. While improved network architectures and
inference algorithms have been shown to effectively boost the sampling efficiency of
diffusion models, the role of model size -- a critical determinant of sampling
efficiency -- has not been thoroughly examined. Through empirical analysis of
established text-to-image diffusion models, we conduct an in-depth
investigation into how model size influences sampling efficiency across varying
sampling steps. Our findings unveil a surprising trend: when operating under a
given inference budget, smaller models frequently outperform their larger
equivalents in generating high-quality results. Moreover, we extend our study
to demonstrate the generalizability of these findings by applying various
diffusion samplers, exploring diverse downstream tasks, evaluating
post-distilled models, as well as comparing performance relative to training
compute. These findings open up new pathways for the development of LDM scaling
strategies which can be employed to enhance generative capabilities within
limited inference budgets.
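The central comparison in the abstract is made at matched inference cost rather than matched step count: a smaller model can afford more denoising steps than a larger one within the same compute budget. The sketch below (not the authors' code) illustrates one way such a comparison could be normalized, by allotting each model as many sampling steps as fit into a shared FLOP budget. The model names, per-step FLOP figures, and the budget value are hypothetical placeholders, and the quality-scoring step is only indicated in a comment.

```python
# Minimal sketch: compare LDMs of different sizes under a fixed sampling budget.
# All numbers below are illustrative, not measurements from the paper.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    gflops_per_step: float  # cost of one denoising step of the model


def steps_within_budget(model: Model, budget_gflops: float) -> int:
    """Number of sampling steps the model can afford under the budget."""
    return max(1, int(budget_gflops // model.gflops_per_step))


# Hypothetical small and large text-to-image LDMs.
models = [
    Model("ldm-small", gflops_per_step=50.0),
    Model("ldm-large", gflops_per_step=400.0),
]

budget_gflops = 4_000.0  # shared inference budget for all models

for m in models:
    n_steps = steps_within_budget(m, budget_gflops)
    # In an actual study one would sample images with n_steps denoising steps
    # and score them (e.g. FID or CLIP score) on a fixed prompt set.
    print(f"{m.name}: {n_steps} sampling steps within {budget_gflops:.0f} GFLOPs")
```

Under this kind of budget-matched protocol, the paper's reported trend is that the smaller model, run for more steps, frequently yields higher-quality samples than the larger model run for fewer steps.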