
Dual-Stream Diffusion Net for Text-to-Video Generation

August 16, 2023
Authors: Binhui Liu, Xin Liu, Anbo Dai, Zhiyong Zeng, Zhen Cui, Jian Yang
cs.AI

Abstract

With the emergence of diffusion models, text-to-video generation has recently attracted increasing attention. An important bottleneck, however, is that generated videos often carry flickers and artifacts. In this work, we propose a dual-stream diffusion net (DSDN) to improve the consistency of content variations in generated videos. In particular, the two designed diffusion streams, a video content branch and a motion branch, not only run separately in their private spaces to produce personalized video variations and content, but are also well aligned between the content and motion domains through our designed cross-transformer interaction module, which benefits the smoothness of the generated videos. Besides, we introduce a motion decomposer and combiner to facilitate operations on video motion. Qualitative and quantitative experiments demonstrate that our method can produce strikingly continuous videos with fewer flickers.
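To make the dual-stream idea concrete, below is a minimal PyTorch sketch of two private denoising branches coupled by a cross-transformer interaction module. All names (`CrossTransformer`, `DualStreamDenoiser`), layer choices, and tensor shapes are illustrative assumptions for exposition, not the authors' implementation; the paper's motion decomposer and combiner are omitted here.

```python
# Hypothetical sketch of a dual-stream denoiser with cross-transformer
# interaction; shapes and module choices are assumptions, not DSDN itself.
import torch
import torch.nn as nn

class CrossTransformer(nn.Module):
    """Aligns content and motion features via mutual cross-attention."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.c2m = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.m2c = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_c = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)

    def forward(self, content, motion):
        # Content queries attend to motion tokens, and vice versa, so the
        # two streams exchange information while staying separate branches.
        c, _ = self.c2m(self.norm_c(content), motion, motion)
        m, _ = self.m2c(self.norm_m(motion), content, content)
        return content + c, motion + m

class DualStreamDenoiser(nn.Module):
    """Two private denoising branches coupled by a CrossTransformer."""
    def __init__(self, dim: int):
        super().__init__()
        self.content_branch = nn.TransformerEncoderLayer(dim, 8, batch_first=True)
        self.motion_branch = nn.TransformerEncoderLayer(dim, 8, batch_first=True)
        self.interaction = CrossTransformer(dim)

    def forward(self, content_tokens, motion_tokens):
        # Each stream first denoises in its own space, then the streams
        # are aligned through the interaction module.
        content = self.content_branch(content_tokens)
        motion = self.motion_branch(motion_tokens)
        return self.interaction(content, motion)

# Toy usage: batch of 2 videos, 16 frame tokens, 256-dim features.
denoiser = DualStreamDenoiser(dim=256)
c = torch.randn(2, 16, 256)  # noised content latents
m = torch.randn(2, 16, 256)  # noised motion latents
c_out, m_out = denoiser(c, m)
print(c_out.shape, m_out.shape)  # torch.Size([2, 16, 256]) each
```

The point of the sketch is the coupling pattern: keeping separate branches preserves per-stream specialization, while the cross-attention step is what would counteract content/motion misalignment, the source of the flickering the abstract describes.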