FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline
November 22, 2023
Authors: Vladimir Arkhipkin, Zein Shaheen, Viacheslav Vasilev, Elizaveta Dakhova, Andrey Kuznetsov, Denis Dimitrov
cs.AI
Abstract
Multimedia generation approaches occupy a prominent place in artificial
intelligence research. Text-to-image models achieved high-quality results over
the last few years. However, video synthesis methods have only recently begun to
develop. This paper presents a new two-stage latent diffusion text-to-video
generation architecture based on a text-to-image diffusion model. The first
stage concerns keyframe synthesis to outline the storyline of a video, while
the second is devoted to generating interpolation frames that make the motion
of the scene and objects smooth. We compare several temporal conditioning
approaches for keyframe generation. The results show the advantage of using
separate temporal blocks over temporal layers in terms of metrics reflecting
video generation quality aspects and human preference. The design of our
interpolation model significantly reduces computational costs compared to other
masked frame interpolation approaches. Furthermore, we evaluate different
configurations of the MoVQ-based video decoding scheme to improve consistency and
achieve better PSNR, SSIM, MSE, and LPIPS scores. Finally, we compare our
pipeline with existing solutions and achieve top-2 scores overall and top-1
among open-source solutions: CLIPSIM = 0.2976 and FVD = 433.054. Project page:
https://ai-forever.github.io/kandinsky-video/
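
The abstract's central architectural claim is that adding motion modeling through separate temporal blocks, placed after the blocks of the pretrained text-to-image U-Net, outperforms interleaving temporal layers inside those blocks. The sketch below illustrates the general idea of such a block in PyTorch; the module names, tensor shapes, and structure are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TemporalBlock(nn.Module):
    """Illustrative sketch of a separate temporal block (not the authors' code):
    self-attention along the frame axis, applied after a frozen spatial block
    of the base text-to-image U-Net, with a residual connection."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        # Fold spatial positions into the batch so attention mixes frames only.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        normed = self.norm(tokens)
        attended, _ = self.attn(normed, normed, normed)
        tokens = tokens + self.proj(attended)  # residual keeps spatial content intact
        return tokens.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)


# Usage: 16 latent keyframes processed by one temporal block.
frames = torch.randn(2, 16, 64, 32, 32)   # (batch, frames, channels, height, width)
out = TemporalBlock(channels=64)(frames)  # same shape, information mixed across frames
```

In the contrasting "temporal layers" design, such attention would be woven into the spatial blocks themselves; the paper reports that keeping temporal computation in separate blocks scores better on the video-quality metrics and in human preference studies.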