Animated Stickers: Bringing Stickers to Life with Video Diffusion
February 8, 2024
Authors: David Yan, Winnie Zhang, Luxin Zhang, Anmol Kalia, Dingkang Wang, Ankit Ramchandani, Miao Liu, Albert Pumarola, Edgar Schoenfeld, Elliot Blanchard, Krishna Narni, Yaqiao Luo, Lawrence Chen, Guan Pang, Ali Thabet, Peter Vajda, Amy Bearman, Licheng Yu
cs.AI
Abstract
We introduce animated stickers, a video diffusion model which generates an
animation conditioned on a text prompt and a static sticker image. Our model is
built on top of the state-of-the-art Emu text-to-image model, with the addition
of temporal layers to model motion. Due to the domain gap, i.e., differences in
visual and motion style, a model that performs well at generating natural
videos can no longer generate vivid videos when applied to stickers. To bridge
this gap, we employ a two-stage finetuning pipeline: first with weakly
in-domain data, followed by a human-in-the-loop (HITL) strategy we term
ensemble-of-teachers, which distills the best qualities of multiple teachers
into a smaller student model. We show that this strategy allows us to
specifically target improvements to motion quality while maintaining the style
of the static image. With inference optimizations, our model is able to
generate an eight-frame video with high-quality, interesting, and relevant
motion in under one second.
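
The abstract mentions adding temporal layers to a pretrained text-to-image backbone. As a rough illustration only (the paper's actual architecture is not specified here), the sketch below shows one common way such a layer is inserted: temporal self-attention applied at each spatial location after a frozen spatial block. All module names, tensor layouts, and the residual placement are assumptions.

```python
# Illustrative sketch, not the authors' code: bolting a temporal layer
# onto a frozen spatial (text-to-image) block, in the spirit of video
# diffusion models that extend an image backbone to motion.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the frame axis, applied at each spatial location."""
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        # Fold spatial positions into the batch so attention mixes time only.
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        q = self.norm(seq)
        seq = seq + self.attn(q, q, q)[0]   # residual keeps the image prior
        return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

class SpatioTemporalBlock(nn.Module):
    """Pretrained spatial block (frozen) followed by a new temporal layer."""
    def __init__(self, spatial_block: nn.Module, channels: int):
        super().__init__()
        self.spatial = spatial_block                 # e.g. a UNet resblock
        self.temporal = TemporalAttention(channels)  # newly added, trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # The spatial block sees each frame independently.
        x = self.spatial(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        return self.temporal(x)
```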
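Similarly, the ensemble-of-teachers HITL strategy can be read as a data-curation loop: several teacher models generate candidate animations, human annotators keep only the good ones, and the approved set finetunes a single smaller student. A minimal sketch under that reading, where `teacher`, `human_approves`, and `finetune` are hypothetical stand-ins rather than a real API:

```python
# Illustrative sketch of the HITL loop described in the abstract.
# All callables here are hypothetical stand-ins, not a real API.
from typing import Callable, Iterable, List, Tuple

def build_hitl_set(
    teachers: Iterable[Callable[[str, bytes], bytes]],  # (prompt, sticker) -> video
    prompts_and_stickers: List[Tuple[str, bytes]],
    human_approves: Callable[[bytes], bool],            # annotator judgment
) -> List[Tuple[str, bytes, bytes]]:
    """Collect only human-approved clips, drawn from every teacher."""
    approved = []
    for prompt, sticker in prompts_and_stickers:
        for teacher in teachers:
            video = teacher(prompt, sticker)
            if human_approves(video):  # keep each teacher's best outputs
                approved.append((prompt, sticker, video))
    return approved

# The curated set then finetunes one smaller student model, so it inherits
# the combined strengths of all teachers:
#   student = finetune(student, build_hitl_set(teachers, data, review))
```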