FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline
November 22, 2023
Authors: Vladimir Arkhipkin, Zein Shaheen, Viacheslav Vasilev, Elizaveta Dakhova, Andrey Kuznetsov, Denis Dimitrov
cs.AI
Abstract
Multimedia generation approaches occupy a prominent place in artificial intelligence research. Text-to-image models have achieved high-quality results over the last few years, whereas video synthesis methods have only recently started to develop. This paper presents a new two-stage latent diffusion text-to-video generation architecture based on a text-to-image diffusion model. The first stage performs keyframe synthesis to outline the storyline of a video, while the second is devoted to interpolated frame generation to make the movements of the scene and objects smooth. We compare several temporal conditioning approaches for keyframe generation. The results show the advantage of using separate temporal blocks over temporal layers, both in metrics reflecting video generation quality and in human preference. The design of our interpolation model significantly reduces computational costs compared to other masked frame interpolation approaches. Furthermore, we evaluate different configurations of the MoVQ-based video decoding scheme to improve consistency and achieve better PSNR, SSIM, MSE, and LPIPS scores. Finally, we compare our pipeline with existing solutions and rank second overall and first among open-source solutions: CLIPSIM = 0.2976 and FVD = 433.054. Project page: https://ai-forever.github.io/kandinsky-video/
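To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it illustrates (1) a separate temporal block appended after the frozen spatial layers of a text-to-image backbone, attending only across the frame axis, and (2) a two-stage pipeline that first samples keyframes and then interpolates between neighbouring keyframes. All class names, function signatures, and tensor layouts here are hypothetical assumptions made for illustration.

```python
# Hypothetical sketch of the two ideas described in the abstract;
# names, shapes, and interfaces are illustrative assumptions only.
import torch
import torch.nn as nn


class SeparateTemporalBlock(nn.Module):
    """Temporal self-attention over the frame axis, kept as a standalone
    block rather than being mixed into the pretrained spatial layers."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, channels) latent features
        b, f, t, c = x.shape
        # attend across frames independently for every spatial token
        h = x.permute(0, 2, 1, 3).reshape(b * t, f, c)
        h = self.norm(h)
        h, _ = self.attn(h, h, h, need_weights=False)
        h = h.reshape(b, t, f, c).permute(0, 2, 1, 3)
        return x + h  # residual path leaves the frozen spatial features intact


def two_stage_generation(keyframe_model, interpolation_model, text_emb,
                         num_keyframes: int = 8, frames_between: int = 3):
    """Stage 1: sample keyframes that outline the storyline.
    Stage 2: generate interpolated frames between neighbouring keyframes."""
    keyframes = keyframe_model(text_emb, num_frames=num_keyframes)
    video = [keyframes[:, 0]]
    for i in range(num_keyframes - 1):
        mids = interpolation_model(keyframes[:, i], keyframes[:, i + 1],
                                   text_emb, num_frames=frames_between)
        video.extend(mids[:, j] for j in range(frames_between))
        video.append(keyframes[:, i + 1])
    return torch.stack(video, dim=1)  # (batch, total_frames, C, H, W)
```

In this reading, keeping the temporal computation in a residual block of its own (rather than interleaving temporal layers inside the spatial modules) is what lets the pretrained text-to-image weights stay untouched while new temporal parameters are trained; the paper's comparison of these conditioning schemes is summarized only at the level of metrics and human preference in the abstract above.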