Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation

August 27, 2024
Authors: Xiaojuan Wang, Boyang Zhou, Brian Curless, Ira Kemelmacher-Shlizerman, Aleksander Holynski, Steven M. Seitz
cs.AI

Abstract
We present a method for generating video sequences with coherent motion between a pair of input key frames. We adapt a pretrained large-scale image-to-video diffusion model (originally trained to generate videos moving forward in time from a single input image) for key frame interpolation, i.e., to produce a video in between two input frames. We accomplish this adaptation through a lightweight fine-tuning technique that produces a version of the model that instead predicts videos moving backwards in time from a single input image. This model (along with the original forward-moving model) is subsequently used in a dual-directional diffusion sampling process that combines the overlapping model estimates starting from each of the two keyframes. Our experiments show that our method outperforms both existing diffusion-based methods and traditional frame interpolation techniques.
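The dual-directional sampling described above can be sketched as a single denoising step: the original forward-motion model sees the clip in normal temporal order conditioned on the first keyframe, while the fine-tuned backward-motion model sees the time-reversed clip conditioned on the last keyframe, and the two overlapping noise estimates are fused before stepping the sampler. The function names, the model/scheduler call signatures, and the simple averaging fusion below are all illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def dual_directional_step(x_t, t, forward_model, backward_model, step_fn,
                          first_frame, last_frame):
    """One hypothetical denoising step of dual-directional sampling.

    x_t: noisy video latents with the frame axis first, shape (frames, ...).
    forward_model / backward_model: callables (latents, t, cond) -> noise
    estimate; their signatures are assumptions for this sketch.
    """
    # Forward branch: the original image-to-video model, conditioned on the
    # first keyframe, predicts noise for the clip in forward temporal order.
    eps_fwd = forward_model(x_t, t, first_frame)

    # Backward branch: the fine-tuned backward-motion model, conditioned on
    # the last keyframe, sees the clip reversed in time; its prediction is
    # flipped back so both estimates align frame by frame.
    eps_bwd = np.flip(backward_model(np.flip(x_t, axis=0), t, last_frame),
                      axis=0)

    # Fuse the overlapping estimates (a plain average stands in for the
    # paper's fusion rule) and take one sampler step.
    return step_fn(0.5 * (eps_fwd + eps_bwd), t, x_t)
```

In a full sampler this function would be called once per diffusion timestep, with `step_fn` playing the role of the scheduler update (e.g. a DDIM-style step).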
