Matryoshka Diffusion Models
October 23, 2023
Authors: Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Josh Susskind, Navdeep Jaitly
cs.AI
Abstract
Diffusion models are the de facto approach for generating high-quality images
and videos, but learning high-dimensional models remains a formidable task due
to computational and optimization challenges. Existing methods often resort to
training cascaded models in pixel space or using a downsampled latent space of
a separately trained auto-encoder. In this paper, we introduce Matryoshka
Diffusion Models (MDM), an end-to-end framework for high-resolution image and
video synthesis. We propose a diffusion process that denoises inputs at
multiple resolutions jointly and uses a NestedUNet architecture where features
and parameters for small-scale inputs are nested within those of large scales.
In addition, MDM enables a progressive training schedule from lower to higher
resolutions, which leads to significant improvements in optimization for
high-resolution generation. We demonstrate the effectiveness of our approach on
various benchmarks, including class-conditioned image generation,
high-resolution text-to-image, and text-to-video applications. Remarkably, we
can train a single pixel-space model at resolutions of up to 1024x1024 pixels,
demonstrating strong zero-shot generalization using the CC12M dataset, which
contains only 12 million images.
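The core idea of denoising inputs at multiple resolutions jointly can be sketched as follows. This is an illustrative toy, not the paper's implementation: it assumes a standard DDPM-style forward process applied independently at each scale, and the block-average downsampler and the `factors` list are hypothetical choices for the nested resolutions.

```python
import numpy as np

def average_pool(x, factor):
    # Downsample a square image (H, W) by block-averaging.
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def joint_forward_diffusion(x, alpha_bar_t, factors=(1, 2, 4), rng=None):
    # Build nested lower-resolution views of x and noise all of them
    # jointly with the DDPM forward process:
    #   z_t = sqrt(alpha_bar_t) * x + sqrt(1 - alpha_bar_t) * eps.
    # A NestedUNet-style denoiser would then predict all scales at once.
    rng = rng or np.random.default_rng(0)
    noisy = []
    for f in factors:
        xr = average_pool(x, f) if f > 1 else x
        eps = rng.standard_normal(xr.shape)
        noisy.append(np.sqrt(alpha_bar_t) * xr + np.sqrt(1.0 - alpha_bar_t) * eps)
    return noisy

x = np.ones((8, 8))
zs = joint_forward_diffusion(x, alpha_bar_t=0.9)
print([z.shape for z in zs])  # [(8, 8), (4, 4), (2, 2)]
```

The progressive training schedule mentioned in the abstract would then amount to optimizing the denoiser on the small scales first and growing `factors` toward full resolution over the course of training.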