

Dual-Stream Diffusion Net for Text-to-Video Generation

August 16, 2023
Authors: Binhui Liu, Xin Liu, Anbo Dai, Zhiyong Zeng, Zhen Cui, Jian Yang
cs.AI

Abstract

With the rise of diffusion models, text-to-video generation has recently attracted increasing attention. However, an important bottleneck remains: generated videos often carry flickers and artifacts. In this work, we propose a dual-stream diffusion net (DSDN) to improve the consistency of content variations when generating videos. In particular, the two designed diffusion streams, a video content branch and a motion branch, not only run separately in their private spaces to produce personalized video content and variations, but are also aligned between the content and motion domains through our designed cross-transformer interaction module, which benefits the smoothness of the generated videos. Besides, we introduce a motion decomposer and combiner to facilitate operations on video motion. Qualitative and quantitative experiments demonstrate that our method produces smooth, continuous videos with noticeably fewer flickers.
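As a rough illustration of the dual-stream idea described in the abstract, the sketch below pairs a content branch and a motion branch with a bidirectional cross-attention interaction module, plus a toy motion decomposer/combiner based on frame-to-frame latent differences. This is only a minimal sketch under assumed shapes and module choices; all class names, dimensions, and the difference-based decomposition are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of a dual-stream denoising step with cross-attention
# alignment between a content branch and a motion branch.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn


def decompose_motion(frame_latents: torch.Tensor):
    """Hypothetical motion decomposer: first-frame anchor + frame-to-frame differences."""
    base = frame_latents[:, :1]                             # content anchor (first frame)
    motion = frame_latents[:, 1:] - frame_latents[:, :-1]   # temporal differences
    return base, motion


def combine_motion(base: torch.Tensor, motion: torch.Tensor):
    """Inverse of decompose_motion: cumulatively add differences back onto the anchor."""
    return torch.cat([base, base + torch.cumsum(motion, dim=1)], dim=1)


class CrossStreamInteraction(nn.Module):
    """Aligns content and motion features via bidirectional cross-attention."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.content_to_motion = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.motion_to_content = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, content: torch.Tensor, motion: torch.Tensor):
        # Each stream queries the other so the two domains stay aligned.
        c_aligned, _ = self.motion_to_content(content, motion, motion)
        m_aligned, _ = self.content_to_motion(motion, content, content)
        return content + c_aligned, motion + m_aligned


class DualStreamDenoiser(nn.Module):
    """One denoising step over latent video tokens in two private streams."""

    def __init__(self, dim: int = 320):
        super().__init__()
        self.content_branch = nn.TransformerEncoderLayer(dim, 8, batch_first=True)
        self.motion_branch = nn.TransformerEncoderLayer(dim, 8, batch_first=True)
        self.interaction = CrossStreamInteraction(dim)
        self.content_head = nn.Linear(dim, dim)  # predicts content-stream noise
        self.motion_head = nn.Linear(dim, dim)   # predicts motion-stream noise

    def forward(self, content_latents: torch.Tensor, motion_latents: torch.Tensor):
        # Private per-stream processing, then cross-stream alignment.
        c = self.content_branch(content_latents)
        m = self.motion_branch(motion_latents)
        c, m = self.interaction(c, m)
        return self.content_head(c), self.motion_head(m)


if __name__ == "__main__":
    # Toy shapes: batch of 2 videos, 16 frame tokens, feature dim 320.
    frames = torch.randn(2, 16, 320)
    base, motion = decompose_motion(frames)
    assert torch.allclose(combine_motion(base, motion), frames, atol=1e-5)

    model = DualStreamDenoiser(dim=320)
    eps_content, eps_motion = model(frames, torch.cat([base, motion], dim=1)[:, :16])
    print(eps_content.shape, eps_motion.shape)  # torch.Size([2, 16, 320]) each

The bidirectional cross-attention stands in for the paper's cross-transformer interaction module: each stream attends to the other's tokens, which is one straightforward way to couple otherwise independent denoising branches.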