Make Pixels Dance: High-Dynamic Video Generation
November 18, 2023
Authors: Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, Hang Li
cs.AI
Abstract
Creating high-dynamic videos, such as those featuring motion-rich actions and
sophisticated visual effects, poses a significant challenge in the field of
artificial intelligence. Unfortunately, current state-of-the-art video generation methods,
primarily focusing on text-to-video generation, tend to produce video clips
with minimal motions despite maintaining high fidelity. We argue that relying
solely on text instructions is insufficient and suboptimal for video
generation. In this paper, we introduce PixelDance, a novel approach based on
diffusion models that incorporates image instructions for both the first and
last frames in conjunction with text instructions for video generation.
Comprehensive experimental results demonstrate that PixelDance trained with
public data exhibits significantly better proficiency in synthesizing videos
with complex scenes and intricate motions, setting a new standard for video
generation.
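
To make the conditioning scheme described in the abstract concrete, below is a minimal PyTorch sketch of how image instructions for the first and last frames might be combined with noisy video latents and a text embedding inside a diffusion denoiser. This is not the authors' released implementation: the module name, tensor shapes, channel-wise concatenation, and the additive text injection are all illustrative assumptions (a full model would typically use a latent-diffusion backbone with cross-attention for text conditioning).

# Sketch only: names and shapes are hypothetical, not from the paper.
import torch
import torch.nn as nn

class ConditionedVideoDenoiser(nn.Module):
    """Toy stand-in for a video diffusion denoiser that accepts
    first/last-frame image instructions plus a text embedding."""

    def __init__(self, latent_channels=4, text_dim=768, hidden=64):
        super().__init__()
        # Input channels: noisy latents (C) + first-frame cond (C) + last-frame cond (C).
        self.in_conv = nn.Conv3d(latent_channels * 3, hidden, kernel_size=3, padding=1)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.out_conv = nn.Conv3d(hidden, latent_channels, kernel_size=3, padding=1)

    def forward(self, noisy_latents, first_frame, last_frame, text_emb):
        # noisy_latents: (B, C, T, H, W); first/last_frame: (B, C, H, W).
        b, c, t, h, w = noisy_latents.shape
        # Broadcast each image instruction across the temporal axis.
        first = first_frame.unsqueeze(2).expand(b, c, t, h, w)
        last = last_frame.unsqueeze(2).expand(b, c, t, h, w)
        x = torch.cat([noisy_latents, first, last], dim=1)
        x = self.in_conv(x)
        # Inject the text instruction as a simple additive bias
        # (a real model would use cross-attention instead).
        x = x + self.text_proj(text_emb)[:, :, None, None, None]
        return self.out_conv(x)  # predicted noise, same shape as the latents

# Usage: one denoising call on random tensors.
model = ConditionedVideoDenoiser()
noise_pred = model(
    torch.randn(1, 4, 16, 32, 32),  # noisy video latents
    torch.randn(1, 4, 32, 32),      # first-frame image instruction
    torch.randn(1, 4, 32, 32),      # last-frame image instruction
    torch.randn(1, 768),            # text embedding
)
print(noise_pred.shape)  # torch.Size([1, 4, 16, 32, 32])

Broadcasting the frame instructions across every time step is only one simple design; another option consistent with the abstract's first/last-frame framing would be to concatenate each instruction only at its corresponding temporal position and pad the remaining frames with zeros.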