FreeInit: Bridging Initialization Gap in Video Diffusion Models
December 12, 2023
Authors: Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu
cs.AI
Abstract
Though diffusion-based video generation has witnessed rapid progress, the
inference results of existing models still exhibit unsatisfactory temporal
consistency and unnatural dynamics. In this paper, we delve into the noise
initialization of video diffusion models and discover an implicit
training-inference gap that accounts for the unsatisfactory inference quality.
Our key findings are: 1) the spatial-temporal frequency distribution of the
initial latent at inference is intrinsically different from that at training,
and 2) the denoising process is significantly influenced by the low-frequency
components of the initial noise. Motivated by these observations, we propose a
concise yet effective inference sampling strategy, FreeInit, which
significantly improves the temporal consistency of videos generated by
diffusion models. By iteratively refining the spatial-temporal low-frequency
components of the initial latent during inference, FreeInit compensates for
the initialization gap between training and inference, thus effectively
improving the subject appearance and temporal consistency of the generation
results. Extensive experiments demonstrate that FreeInit consistently enhances
the generation results of various text-to-video generation models without
additional training.
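To make the re-initialization loop described above concrete, here is a minimal
PyTorch sketch of the idea: each round fully denoises the current initial
noise, diffuses the resulting clean latent back to the terminal timestep, keeps
only its spatio-temporal low-frequency components via a 3D FFT, and refills the
high frequencies with fresh Gaussian noise. The `denoise` callable, the
`alpha_bar_T` coefficient, and the Gaussian filter with cutoffs `d_t`/`d_s` are
illustrative assumptions, not the paper's official implementation or exact
hyperparameters.

```python
import torch
import torch.fft as fft

def gaussian_low_pass(shape, d_t=0.25, d_s=0.25):
    """Gaussian low-pass mask over normalized spatio-temporal frequencies.

    `shape` is (frames, height, width); `d_t` and `d_s` are illustrative
    temporal/spatial cutoffs, not the paper's exact settings.
    """
    T, H, W = shape
    t = torch.linspace(-1, 1, T).view(T, 1, 1)
    h = torch.linspace(-1, 1, H).view(1, H, 1)
    w = torch.linspace(-1, 1, W).view(1, 1, W)
    d2 = (t / d_t) ** 2 + (h / d_s) ** 2 + (w / d_s) ** 2
    return torch.exp(-0.5 * d2)

def freq_mix_3d(noised_latent, fresh_noise, lpf):
    """Keep the low frequencies of `noised_latent` and the high frequencies
    of `fresh_noise`, mixing in the 3D FFT domain over (frames, H, W)."""
    dims = (-3, -2, -1)
    lat = fft.fftshift(fft.fftn(noised_latent, dim=dims), dim=dims)
    noi = fft.fftshift(fft.fftn(fresh_noise, dim=dims), dim=dims)
    mixed = lat * lpf + noi * (1.0 - lpf)
    return fft.ifftn(fft.ifftshift(mixed, dim=dims), dim=dims).real

def freeinit_sample(denoise, z_T, alpha_bar_T, num_iters=3):
    """Iterative noise re-initialization at inference time.

    `denoise` is a hypothetical callable running the full sampling loop
    (initial noise -> clean latent z_0); `alpha_bar_T` is the cumulative
    noise-schedule coefficient at the largest timestep T. Latents are
    assumed to have shape (batch, channels, frames, height, width).
    """
    lpf = gaussian_low_pass(z_T.shape[-3:]).to(z_T)
    for _ in range(num_iters):
        z_0 = denoise(z_T)                        # full denoising pass
        # Re-diffuse the clean latent back to timestep T ...
        eps = torch.randn_like(z_0)
        z_noised = alpha_bar_T**0.5 * z_0 + (1 - alpha_bar_T)**0.5 * eps
        # ... then replace its high frequencies with fresh Gaussian noise,
        # keeping the refined spatio-temporal low-frequency layout.
        z_T = freq_mix_3d(z_noised, torch.randn_like(z_0), lpf)
    return denoise(z_T)                           # final generation
```

Each extra iteration trades an additional full sampling pass for improved
temporal consistency: per the abstract's reasoning, the refined low-frequency
components of the initial latent progressively narrow the gap between the
noise distribution seen at inference and that seen during training.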