

Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation

December 7, 2023
作者: Zhiwu Qing, Shiwei Zhang, Jiayu Wang, Xiang Wang, Yujie Wei, Yingya Zhang, Changxin Gao, Nong Sang
cs.AI

Abstract

Despite diffusion models having shown powerful abilities to generate photorealistic images, generating videos that are realistic and diverse still remains in its infancy. One of the key reasons is that current methods intertwine spatial content and temporal dynamics together, leading to a notably increased complexity of text-to-video generation (T2V). In this work, we propose HiGen, a diffusion model-based method that improves performance by decoupling the spatial and temporal factors of videos from two perspectives, i.e., structure level and content level. At the structure level, we decompose the T2V task into two steps, including spatial reasoning and temporal reasoning, using a unified denoiser. Specifically, we generate spatially coherent priors using text during spatial reasoning and then generate temporally coherent motions from these priors during temporal reasoning. At the content level, we extract two subtle cues from the content of the input video that can express motion and appearance changes, respectively. These two cues then guide the model's training for generating videos, enabling flexible content variations and enhancing temporal stability. Through the decoupled paradigm, HiGen can effectively reduce the complexity of this task and generate realistic videos with semantics accuracy and motion stability. Extensive experiments demonstrate the superior performance of HiGen over the state-of-the-art T2V methods.
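The two-level decoupling described above (spatial reasoning to get a text-conditioned image prior, then temporal reasoning to animate it with the same denoiser) can be illustrated with a toy sketch. This is only a schematic of the control flow implied by the abstract, not HiGen's implementation: the function names, the `mode` flag, and the linear "denoising" updates are all invented for illustration; the real model is a diffusion denoiser, not an averaging rule.

```python
import numpy as np

def unified_denoiser(x, text_emb, mode):
    # Hypothetical stand-in for the paper's unified denoiser: one shared
    # body whose behavior is switched by a mode flag. The real denoiser
    # is a diffusion network; here we just mix signals linearly.
    if mode == "spatial":
        # Spatial reasoning: pull the sample toward text-conditioned content.
        return 0.5 * x + 0.5 * text_emb
    else:
        # Temporal reasoning: pull each frame toward the shared content,
        # crudely mimicking temporally coherent motion around one prior.
        return 0.9 * x + 0.1 * x.mean(axis=0)

def generate_video(text_emb, num_frames=8, steps=4, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1 (structure level): spatial reasoning produces a single
    # spatially coherent prior from the text embedding.
    prior = rng.normal(size=text_emb.shape)
    for _ in range(steps):
        prior = unified_denoiser(prior, text_emb, mode="spatial")
    # Step 2: temporal reasoning denoises a stack of frames initialized
    # around that prior, reusing the same denoiser.
    frames = prior[None, :] + 0.1 * rng.normal(size=(num_frames,) + text_emb.shape)
    for _ in range(steps):
        frames = unified_denoiser(frames, text_emb, mode="temporal")
    return frames

video = generate_video(np.ones(16))
print(video.shape)  # (8, 16): num_frames x feature_dim
```

The point of the sketch is the factorization, not the arithmetic: the T2V problem is split into an image-like step and a motion step sharing one set of weights, which is the complexity reduction the abstract claims.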