SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers
January 16, 2024
Authors: Nanye Ma, Mark Goldstein, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden, Saining Xie
cs.AI
Abstract
We present Scalable Interpolant Transformers (SiT), a family of generative
models built on the backbone of Diffusion Transformers (DiT). The interpolant
framework, which allows for connecting two distributions in a more flexible way
than standard diffusion models, makes possible a modular study of various
design choices impacting generative models built on dynamical transport: using
discrete vs. continuous time learning, deciding the objective for the model to
learn, choosing the interpolant connecting the distributions, and deploying a
deterministic or stochastic sampler. By carefully introducing the above
ingredients, SiT surpasses DiT uniformly across model sizes on the conditional
ImageNet 256x256 benchmark using the exact same backbone, number of parameters,
and GFLOPs. By exploring various diffusion coefficients, which can be tuned
separately from learning, SiT achieves an FID-50K score of 2.06.
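The interpolant framework described above can be illustrated with a minimal sketch: a noise endpoint and a data endpoint are connected by a time-indexed interpolant, the model is regressed onto the interpolant's velocity in continuous time, and a deterministic sampler integrates the learned velocity field as an ODE. This is a toy illustration under assumed choices (a linear interpolant and a velocity-matching objective); function names and shapes here are not the authors' code.

```python
import numpy as np

def linear_interpolant(x0, x1, t):
    """x_t = (1 - t) * x0 + t * x1 -- one of many valid interpolant choices."""
    return (1.0 - t) * x0 + t * x1

def interpolant_velocity(x0, x1, t):
    """d x_t / dt for the linear interpolant: simply x1 - x0."""
    return x1 - x0

def velocity_matching_loss(model, x0, x1, t):
    """Continuous-time objective: regress model(x_t, t) onto dx_t/dt."""
    xt = linear_interpolant(x0, x1, t)
    target = interpolant_velocity(x0, x1, t)
    pred = model(xt, t)
    return np.mean((pred - target) ** 2)

def euler_ode_sample(model, x0, n_steps=50):
    """Deterministic sampler: integrate dx/dt = model(x, t) from t=0 to t=1."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        t = np.full((x.shape[0], 1), i * dt)
        x = x + dt * model(x, t)
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 4))   # "noise" endpoint
x1 = rng.standard_normal((8, 4))   # "data" endpoint
t = rng.uniform(size=(8, 1))

# A toy "model" that returns the exact velocity incurs zero loss,
# and the Euler ODE sampler transports x0 onto x1.
oracle = lambda xt, t: x1 - x0
print(velocity_matching_loss(oracle, x0, x1, t))  # → 0.0
```

In practice the oracle is replaced by a learned network (the DiT backbone in SiT), and, as the abstract notes, the stochastic variant adds a diffusion coefficient to the sampler that can be tuned after training without retraining the model.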