FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline
November 22, 2023
Authors: Vladimir Arkhipkin, Zein Shaheen, Viacheslav Vasilev, Elizaveta Dakhova, Andrey Kuznetsov, Denis Dimitrov
cs.AI
Abstract
Multimedia generation approaches occupy a prominent place in artificial
intelligence research. Text-to-image models have achieved high-quality results
over the last few years, but video synthesis methods have only recently begun
to mature. This paper presents a new two-stage latent diffusion text-to-video
generation architecture built on a text-to-image diffusion model. The first
stage synthesizes keyframes that lay out the storyline of the video, while the
second generates interpolation frames that make the motion of the scene and
objects smooth. We compare several temporal conditioning approaches for
keyframe generation; the results show that separate temporal blocks outperform
temporal layers on metrics reflecting video generation quality and on human
preference. The design of our interpolation model significantly reduces
computational cost compared to other masked frame interpolation approaches.
Furthermore, we evaluate different configurations of the MoVQ-based video
decoding scheme to improve temporal consistency and obtain better PSNR, SSIM,
MSE, and LPIPS scores. Finally, we compare our pipeline with existing solutions
and achieve top-2 scores overall and top-1 among open-source solutions:
CLIPSIM = 0.2976 and FVD = 433.054. Project page:
https://ai-forever.github.io/kandinsky-video/
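
The abstract contrasts two ways of injecting temporal conditioning into a text-to-image backbone: a temporal layer fused inside each spatial block versus a separate temporal block applied after it. The sketch below is a minimal, illustrative PyTorch comparison of the two schemes; the module names, use of 1D temporal convolutions, and tensor shapes are assumptions for illustration and do not reproduce the authors' actual architecture.

```python
# Illustrative sketch only (not the FusionFrames code): contrasting a
# temporal layer fused into a spatial block with a separate temporal block.
import torch
import torch.nn as nn


def _mix_over_frames(x: torch.Tensor, temporal: nn.Conv1d) -> torch.Tensor:
    # x: (batch, frames, channels, height, width); convolve over the frame axis
    # independently at every spatial location.
    b, t, c, h, w = x.shape
    x = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
    x = temporal(x)
    return x.reshape(b, h, w, c, t).permute(0, 4, 3, 1, 2)


class TemporalLayerBlock(nn.Module):
    """Spatial block with a temporal layer fused inside it."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.temporal = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Per-frame spatial processing, then temporal mixing inside the block.
        x = self.spatial(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        return _mix_over_frames(x, self.temporal)


class SeparateTemporalBlock(nn.Module):
    """Dedicated temporal block that follows an unchanged spatial block."""

    def __init__(self, channels: int):
        super().__init__()
        self.temporal = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only temporal mixing; spatial blocks stay exactly as in the
        # pretrained text-to-image model.
        return _mix_over_frames(x, self.temporal)


if __name__ == "__main__":
    frames = torch.randn(1, 8, 16, 32, 32)  # (batch, frames, channels, H, W)
    print(TemporalLayerBlock(16)(frames).shape)     # torch.Size([1, 8, 16, 32, 32])
    print(SeparateTemporalBlock(16)(frames).shape)  # torch.Size([1, 8, 16, 32, 32])
```

Under this reading, the separate-block variant leaves the pretrained spatial pathway untouched and concentrates all frame-to-frame mixing in its own module, which is the design the abstract reports as preferable on both automatic metrics and human evaluation.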