

On the Scalability of Diffusion-based Text-to-Image Generation

April 3, 2024
Authors: Hao Li, Yang Zou, Ying Wang, Orchid Majumder, Yusheng Xie, R. Manmatha, Ashwin Swaminathan, Zhuowen Tu, Stefano Ermon, Stefano Soatto
cs.AI

Abstract

Scaling up model and data size has been quite successful for the evolution of LLMs. However, the scaling laws for diffusion-based text-to-image (T2I) models are not fully explored. It is also unclear how to efficiently scale these models for better performance at reduced cost. Differing training settings and the high cost of training make fair model comparisons extremely difficult. In this work, we empirically study the scaling properties of diffusion-based T2I models by performing extensive and rigorous ablations on scaling both the denoising backbone and the training set, including training scaled UNet and Transformer variants ranging from 0.4B to 4B parameters on datasets of up to 600M images. For model scaling, we find that the location and amount of cross-attention distinguishes the performance of existing UNet designs, and that increasing the number of transformer blocks is more parameter-efficient for improving text-image alignment than increasing channel counts. We then identify an efficient UNet variant that is 45% smaller and 28% faster than SDXL's UNet. On the data scaling side, we show that the quality and diversity of the training set matter more than dataset size alone. Increasing caption density and diversity improves text-image alignment performance and learning efficiency. Finally, we provide scaling functions that predict text-image alignment performance as functions of model size, compute, and dataset size.
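The abstract's closing claim, that text-image alignment can be predicted as a function of model size, compute, and dataset size, is the kind of scaling-law fit that is easy to illustrate. Below is a minimal sketch of fitting such a function along the compute axis only; the saturating power-law form, the `scaling_fn` name, and all data points are assumptions for illustration, not values from the paper.

```python
# A minimal sketch of fitting a scaling function. The power-law form and all
# numbers below are illustrative assumptions -- the abstract does not specify
# the paper's actual functional form or measurements.
import numpy as np
from scipy.optimize import curve_fit

def scaling_fn(compute, a, b, c):
    # Assumed saturating power law: alignment approaches a ceiling `a`
    # as compute grows, a common shape for scaling-law fits.
    return a - b * np.power(compute, -c)

# Hypothetical (compute, alignment-score) pairs from ablation runs,
# with compute expressed in units of 1e19 training FLOPs.
compute = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
score = np.array([0.21, 0.24, 0.27, 0.29, 0.31])

(a, b, c), _ = curve_fit(scaling_fn, compute, score, p0=[0.35, 0.15, 0.3])
print(f"fit: score ~ {a:.3f} - {b:.3f} * C^(-{c:.3f})")

# Extrapolate to a 10x larger compute budget (C = 1000, i.e. 1e22 FLOPs).
print(f"predicted score at 1e22 FLOPs: {scaling_fn(1000.0, a, b, c):.3f}")
```

The paper reports scaling functions over model size, compute, and dataset size jointly; this one-dimensional fit only shows the mechanics of extrapolating performance from a handful of smaller-scale runs.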
