Animated Stickers: Bringing Stickers to Life with Video Diffusion
February 8, 2024
Authors: David Yan, Winnie Zhang, Luxin Zhang, Anmol Kalia, Dingkang Wang, Ankit Ramchandani, Miao Liu, Albert Pumarola, Edgar Schoenfeld, Elliot Blanchard, Krishna Narni, Yaqiao Luo, Lawrence Chen, Guan Pang, Ali Thabet, Peter Vajda, Amy Bearman, Licheng Yu
cs.AI
Abstract
We introduce animated stickers, a video diffusion model which generates an
animation conditioned on a text prompt and static sticker image. Our model is
built on top of the state-of-the-art Emu text-to-image model, with the addition
of temporal layers to model motion. Due to the domain gap, i.e. differences in
visual and motion style, a model which performed well on generating natural
videos can no longer generate vivid videos when applied to stickers. To bridge
this gap, we employ a two-stage finetuning pipeline: first with weakly
in-domain data, followed by a human-in-the-loop (HITL) strategy we term
ensemble-of-teachers, which distills the best qualities of multiple teachers
into a smaller student model. We show that this strategy allows us to specifically
target improvements to motion quality while maintaining the style from the
static image. With inference optimizations, our model is able to generate an
eight-frame video with high-quality, interesting, and relevant motion in under
one second.
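The abstract states that motion is modeled by adding temporal layers on top of the pretrained Emu text-to-image backbone. The sketch below illustrates one common way such layers can be interleaved with the existing spatial blocks: self-attention applied only along the frame axis, with the pretrained spatial weights reused as-is. This is an illustrative assumption, not the authors' implementation; the module names, the (batch, channels, frames, height, width) tensor layout, and the choice of temporal self-attention are all hypothetical.

```python
# Minimal sketch of a temporal layer added to a pretrained image model, assuming
# a (batch, channels, frames, height, width) feature layout. Not the paper's code.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Self-attention over the time axis, applied independently per spatial location."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        b, c, t, h, w = x.shape
        # Fold spatial positions into the batch so attention runs only over frames.
        seq = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, t, c)
        out, _ = self.attn(self.norm(seq), self.norm(seq), self.norm(seq))
        out = (seq + out).reshape(b, h, w, t, c).permute(0, 4, 3, 1, 2)
        return out


class SpatioTemporalBlock(nn.Module):
    """Wraps a pretrained spatial block and adds a newly initialized temporal layer."""

    def __init__(self, spatial_block: nn.Module, channels: int):
        super().__init__()
        self.spatial = spatial_block                   # pretrained image-model block
        self.temporal = TemporalAttention(channels)    # new layer, trained on video data

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, h, w = x.shape
        # Apply the spatial block frame-by-frame, then mix information across frames.
        frames = self.spatial(x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))
        frames = frames.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
        return self.temporal(frames)


# Example: wrap a shape-preserving spatial layer (here a plain conv) for 8-frame input.
block = SpatioTemporalBlock(nn.Conv2d(64, 64, 3, padding=1), channels=64)
video_features = torch.randn(2, 64, 8, 16, 16)   # (batch, channels, frames, h, w)
out = block(video_features)                       # same shape as the input
```

Folding spatial positions into the batch dimension keeps the added cost linear in the number of pixels while still letting every location exchange information across all eight frames; the original spatial weights can remain frozen or be finetuned during the in-domain stages.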