VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation
May 18, 2023
Authors: Wenjing Wang, Huan Yang, Zixi Tuo, Huiguo He, Junchen Zhu, Jianlong Fu, Jiaying Liu
cs.AI
Abstract
We present VideoFactory, an innovative framework for generating high-quality
open-domain videos. VideoFactory excels in producing high-definition
(1376x768), widescreen (16:9) videos without watermarks, creating an engaging
user experience. Generating videos guided by text instructions poses
significant challenges, such as modeling the complex relationship between space
and time, and the lack of large-scale text-video paired data. Previous
approaches extend pretrained text-to-image generation models by adding temporal
1D convolution/attention modules for video generation. However, these
approaches overlook the importance of jointly modeling space and time,
inevitably leading to temporal distortions and misalignment between texts and
videos. In this paper, we propose a novel approach that strengthens the
interaction between spatial and temporal perceptions. In particular, we utilize
a swapped cross-attention mechanism in 3D windows that alternates the "query"
role between spatial and temporal blocks, so that each reinforces the other.
To fully unlock the model's capabilities for high-quality video
generation, we curate a large-scale video dataset called HD-VG-130M. This
dataset comprises 130 million text-video pairs from the open domain, ensuring
high-definition, widescreen, and watermark-free characteristics. Objective metrics
and user studies demonstrate the superiority of our approach in terms of
per-frame quality, temporal correlation, and text-video alignment, by clear
margins.
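
To make the swapped-attention idea concrete, below is a minimal PyTorch sketch under our own assumptions: the class name SwappedCrossAttention, the pre-windowed (windows, tokens, dim) tensor layout, and the residual connection are illustrative choices for exposition, not the paper's released implementation.

```python
import torch
import torch.nn as nn


class SwappedCrossAttention(nn.Module):
    """Cross-attention whose "query" role alternates between a spatial
    and a temporal feature branch, sketching the swap described above."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(
        self,
        spatial_tokens: torch.Tensor,   # (windows, tokens, dim)
        temporal_tokens: torch.Tensor,  # (windows, tokens, dim)
        query_from: str = "spatial",
    ) -> torch.Tensor:
        # Inputs are assumed to already be partitioned into local 3D
        # (time x height x width) windows, so attention stays local.
        if query_from == "spatial":
            # Spatial block: spatial features query temporal context.
            q, kv = spatial_tokens, temporal_tokens
        else:
            # Temporal block: the roles are swapped.
            q, kv = temporal_tokens, spatial_tokens
        out, _ = self.attn(q, kv, kv)
        return q + out  # residual connection around the attention


if __name__ == "__main__":
    attn = SwappedCrossAttention(dim=64)
    s = torch.randn(2, 128, 64)  # spatial-branch tokens per 3D window
    t = torch.randn(2, 128, 64)  # temporal-branch tokens per 3D window
    # Alternate the query role across consecutive blocks.
    y_spatial = attn(s, t, query_from="spatial")
    y_temporal = attn(s, t, query_from="temporal")
    print(y_spatial.shape, y_temporal.shape)  # both (2, 128, 64)
```

The key contrast with the temporal-only 1D attention modules criticized in the abstract is that here each branch attends across the other, rather than spatial and temporal modeling being handled in isolation.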