TRIP: Temporal Residual Learning with Image Noise Prior for Image-to-Video Diffusion Models
March 25, 2024
Authors: Zhongwei Zhang, Fuchen Long, Yingwei Pan, Zhaofan Qiu, Ting Yao, Yang Cao, Tao Mei
cs.AI
Abstract
Recent advances in text-to-video generation have demonstrated the utility of
powerful diffusion models. Nevertheless, the problem is not trivial when
shaping diffusion models to animate a static image (i.e., image-to-video
generation). The difficulty stems from the requirement that the diffusion
process of subsequent animated frames should not only preserve faithful
alignment with the given image but also pursue temporal coherence among
adjacent frames. To alleviate this, we present TRIP, a new image-to-video
diffusion paradigm that pivots on an image noise prior derived from the
static image to jointly trigger inter-frame relational reasoning and ease
coherent temporal modeling via temporal residual learning. Technically, the
image noise prior is first obtained through a one-step backward diffusion
process based on both the static-image and noised video latent codes. Next,
TRIP executes a residual-like dual-path scheme for noise prediction: 1) a
shortcut path that directly takes the image noise prior as the reference
noise of each frame to strengthen the alignment between the first frame and
subsequent frames; 2) a residual path that employs a 3D-UNet over the noised
video and static-image latent codes to enable inter-frame relational
reasoning, thereby easing the learning of the residual noise for each frame.
Furthermore, the reference and residual noise of each frame are dynamically
merged via an attention mechanism for final video generation. Extensive
experiments on the WebVid-10M, DTDB, and MSR-VTT datasets demonstrate the
effectiveness of our TRIP for image-to-video generation. Please see our
project page at https://trip-i2v.github.io/TRIP/.