Progressive Autoregressive Video Diffusion Models
October 10, 2024
Authors: Desai Xie, Zhan Xu, Yicong Hong, Hao Tan, Difan Liu, Feng Liu, Arie Kaufman, Yang Zhou
cs.AI
Abstract
Current frontier video diffusion models have demonstrated remarkable results
in generating high-quality videos. However, they can only generate short video
clips, normally around 10 seconds or 240 frames, due to computational
limitations during training. In this work, we show that existing models can be
naturally extended to autoregressive video diffusion models without changing
their architectures. Our key idea is to assign the latent frames progressively
increasing noise levels rather than a single noise level, which allows for
fine-grained conditioning among the latents and large overlaps between the
attention windows. Such progressive video denoising allows our models to
autoregressively generate video frames without quality degradation or abrupt
scene changes. We present state-of-the-art results on long video generation at
1 minute (1440 frames at 24 FPS). Videos from this paper are available at
https://desaixie.github.io/pa-vdm/.
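
To make the key idea concrete, below is a minimal PyTorch sketch of progressive autoregressive denoising with a sliding window of per-frame noise levels. The `denoise_step` callable, the window size, and the latent shape are illustrative assumptions, not the authors' actual implementation; `denoise_step` is assumed to run one model pass that lowers every frame's noise level by one increment (1/window).

```python
import torch

def progressive_autoregressive_sample(
    denoise_step,                # hypothetical: one model pass over the window
    num_frames,                  # total latent frames to generate
    window=24,                   # frames held in the sliding window
    latent_shape=(4, 32, 32),    # per-frame latent shape (illustrative)
    device="cpu",
):
    """Sketch of progressive autoregressive video denoising.

    Each frame in the window carries its own noise level, increasing from
    the oldest (almost clean) to the newest (pure noise). One call to
    `denoise_step(latents, noise_levels)` is assumed to reduce every
    frame's noise level by 1/window. The front frame then reaches level 0,
    is emitted, and a fresh pure-noise frame is appended at the back.
    """
    step = 1.0 / window
    # Monotonically increasing noise levels across the window: [1/w, ..., 1].
    noise_levels = torch.linspace(step, 1.0, window, device=device)
    latents = torch.randn(window, *latent_shape, device=device)
    outputs = []

    while len(outputs) < num_frames:
        # Jointly denoise the window; newer (noisier) frames attend to
        # older, less-noisy frames, giving fine-grained conditioning.
        latents = denoise_step(latents, noise_levels)
        # The front frame is now fully denoised: emit it.
        outputs.append(latents[0])
        # Slide the window: drop the clean frame, append pure noise. The
        # remaining frames line up with the same fixed noise levels, each
        # one increment lower than before.
        latents = torch.cat(
            [latents[1:], torch.randn(1, *latent_shape, device=device)],
            dim=0,
        )
    return torch.stack(outputs)
```

Because consecutive windows share all but one frame, the attention windows overlap almost completely, which is what the abstract credits for avoiding quality degradation and abrupt scene changes.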