AnimateZero: Video Diffusion Models are Zero-Shot Image Animators
December 6, 2023
Authors: Jiwen Yu, Xiaodong Cun, Chenyang Qi, Yong Zhang, Xintao Wang, Ying Shan, Jian Zhang
cs.AI
Abstract
Large-scale text-to-video (T2V) diffusion models have made great progress in
recent years in terms of visual quality, motion, and temporal consistency.
However, the generation process is still a black box in which all attributes
(e.g., appearance, motion) are learned and generated jointly, with no precise
control beyond rough text descriptions. Inspired by image animation, which
decouples a video into a specific appearance with corresponding motion, we
propose AnimateZero to unveil the pre-trained text-to-video diffusion model,
i.e., AnimateDiff, and provide it with more precise appearance and motion
control. For appearance control, we borrow intermediate latents and their
features from text-to-image (T2I) generation to ensure that the generated
first frame is identical to the given generated image. For temporal control,
we replace the global temporal attention of the original T2V model with our
proposed positional-corrected window attention so that the other frames align
well with the first frame. Empowered by these methods, AnimateZero can
successfully control the generation process without further training. As a
zero-shot image animator for given images, AnimateZero also enables several
new applications, including interactive video generation and real-image
animation. Detailed experiments demonstrate the effectiveness of the proposed
method in both T2V generation and related applications.
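The abstract does not give implementation details for the proposed
positional-corrected window attention. As a rough illustration of the general
idea of window-style temporal attention anchored to a first frame, the sketch
below builds a boolean attention mask in which each frame attends to a local
causal window of frames plus the first (anchor) frame. The window size, the
causal direction, and the anchoring to frame 0 are all assumptions for
illustration; the paper's method additionally corrects positional embeddings,
which this sketch omits.

```python
import numpy as np

def window_attention_mask(num_frames: int, window: int) -> np.ndarray:
    """Hypothetical temporal attention mask: mask[i, j] is True when
    frame i is allowed to attend to frame j."""
    mask = np.zeros((num_frames, num_frames), dtype=bool)
    for i in range(num_frames):
        lo = max(0, i - window + 1)
        mask[i, lo:i + 1] = True  # local causal window of size `window`
        mask[i, 0] = True         # always attend to the first (anchor) frame
    return mask

mask = window_attention_mask(6, 3)
```

Under this construction, frame 5 attends to frames 3-5 (its local window) and
to frame 0 (the anchor), which is one way the later frames could stay aligned
with the given first frame, as the abstract describes.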