Factorized-Dreamer: Training A High-Quality Video Generator with Limited and Low-Quality Data

August 19, 2024
Authors: Tao Yang, Yangming Shi, Yunwen Huang, Feng Chen, Yin Zheng, Lei Zhang
cs.AI

Abstract

Text-to-video (T2V) generation has gained significant attention due to its wide applications in video generation, editing, enhancement, translation, etc. However, high-quality (HQ) video synthesis is extremely challenging because of the diverse and complex motions that exist in the real world. Most existing works struggle to address this problem because they require collecting large-scale HQ videos, which are inaccessible to the community. In this work, we show that publicly available limited and low-quality (LQ) data are sufficient to train an HQ video generator without recaptioning or finetuning. We factorize the whole T2V generation process into two steps: generating an image conditioned on a highly descriptive caption, and synthesizing the video conditioned on the generated image and a concise caption of motion details. Specifically, we present Factorized-Dreamer, a factorized spatiotemporal framework with several critical designs for T2V generation, including an adapter to combine text and image embeddings, a pixel-aware cross-attention module to capture pixel-level image information, a T5 text encoder to better understand motion descriptions, and a PredictNet to supervise optical flows. We further present a noise schedule, which plays a key role in ensuring the quality and stability of video generation. Our model lowers the requirements for detailed captions and HQ videos, and can be trained directly on limited LQ datasets with noisy and brief captions, such as WebVid-10M, largely alleviating the cost of collecting large-scale HQ video-text pairs. Extensive experiments on a variety of T2V and image-to-video generation tasks demonstrate the effectiveness of our proposed Factorized-Dreamer. Our source code is available at https://github.com/yangxy/Factorized-Dreamer/.
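To make the factorization concrete, below is a minimal sketch of the two-step pipeline the abstract describes: a descriptive caption conditions an image stage, and the resulting image plus a brief motion caption condition a video stage. All names here (`generate_image`, `synthesize_video`, `GeneratedImage`) are hypothetical placeholders with stubbed models, not the actual Factorized-Dreamer API; the sketch only illustrates the split between appearance and motion conditioning.

```python
# Hypothetical sketch of the factorized T2V pipeline from the abstract.
# None of these names come from the Factorized-Dreamer codebase; the two
# model stages are stubbed so the example runs self-contained.

from dataclasses import dataclass

import numpy as np


@dataclass
class GeneratedImage:
    """Stand-in for a decoded image tensor of shape (H, W, C)."""
    pixels: np.ndarray


def generate_image(descriptive_caption: str) -> GeneratedImage:
    """Step 1: text-to-image. A highly descriptive caption conditions an
    image generator (stubbed here with deterministic random pixels)."""
    rng = np.random.default_rng(hash(descriptive_caption) % (2**32))
    return GeneratedImage(pixels=rng.random((512, 512, 3), dtype=np.float32))


def synthesize_video(image: GeneratedImage, motion_caption: str,
                     num_frames: int = 16) -> np.ndarray:
    """Step 2: image-to-video. The generated image supplies pixel-level
    appearance, while a concise motion caption (encoded by a text encoder
    such as T5 in the paper) drives the dynamics. Stubbed here as the
    still image repeated over time."""
    _ = motion_caption  # would condition the temporal model in practice
    return np.stack([image.pixels] * num_frames, axis=0)  # (T, H, W, C)


if __name__ == "__main__":
    # Appearance and motion are described separately; this split is what
    # lets the video stage train on brief, noisy captions (e.g. WebVid-10M).
    image = generate_image("a corgi wearing sunglasses on a sunlit beach")
    video = synthesize_video(image, "the corgi runs toward the camera")
    print(video.shape)  # (16, 512, 512, 3)
```

The design point the sketch captures is that only the image stage needs a rich, detailed prompt; the video stage consumes the image plus a short motion phrase, which is why limited LQ video-text data suffices for training it.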
