

Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation

September 7, 2023
作者: Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu, Songcen Xu, Wei Zhang, Yu-Gang Jiang, Hang Xu
cs.AI

Abstract

Inspired by the remarkable success of Latent Diffusion Models (LDMs) for image synthesis, we study LDMs for text-to-video generation, a formidable challenge due to the computational and memory constraints during both model training and inference. A single LDM is usually capable of generating only a very limited number of video frames. Some existing works train separate prediction models to generate more video frames, but these incur additional training cost and suffer from frame-level jittering. In this paper, we propose a framework called "Reuse and Diffuse", dubbed VidRD, to produce more frames following those already generated by an LDM. Conditioned on an initial video clip with a small number of frames, additional frames are generated iteratively by reusing the original latent features and following the previous diffusion process. In addition, for the autoencoder that translates between pixel space and latent space, we inject temporal layers into its decoder and fine-tune these layers for higher temporal consistency. We also propose a set of strategies for composing video-text data that draw diverse content from multiple existing datasets, including video datasets for action recognition and image-text datasets. Extensive experiments show that our method achieves good results in both quantitative and qualitative evaluations. Our project page is available at https://anonymous0x233.github.io/ReuseAndDiffuse/.
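The iterative "reuse and diffuse" idea described above can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: `denoise_clip` is a hypothetical stand-in for an LDM's reverse diffusion over a clip of latents, and the specific reuse scheme (re-noising the tail latents of the previous clip and padding with fresh noise) is an assumption chosen only to show the control flow of conditioning each new clip on the one before it.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_clip(init_latents, steps=10):
    # Hypothetical stand-in for the LDM's reverse diffusion process.
    # A real model would iteratively predict and subtract noise; here we
    # simply shrink the latents toward zero to keep the sketch runnable.
    z = init_latents.copy()
    for _ in range(steps):
        z = 0.9 * z
    return z

def reuse_and_diffuse(num_clips=3, frames_per_clip=8, reuse_frames=2,
                      latent_shape=(4, 32, 32)):
    """Generate clips iteratively, reusing the tail latents of each clip
    (plus fresh noise) to initialize the diffusion of the next clip."""
    video = []
    init = rng.standard_normal((frames_per_clip, *latent_shape))
    for _ in range(num_clips):
        clip = denoise_clip(init)
        video.append(clip)
        # Reuse the last `reuse_frames` latents as conditioning: re-noise
        # them and pad with fresh noise for the next clip's remaining frames.
        tail = clip[-reuse_frames:]
        fresh = rng.standard_normal(
            (frames_per_clip - reuse_frames, *latent_shape))
        init = np.concatenate(
            [tail + 0.5 * rng.standard_normal(tail.shape), fresh], axis=0)
    return np.concatenate(video, axis=0)

frames = reuse_and_diffuse()
print(frames.shape)  # (24, 4, 32, 32): 3 clips of 8 latent frames each
```

The point of the sketch is that no separate prediction model is trained: the same denoising routine is reused, and temporal continuity comes from seeding each clip's initial latents with those of the previous clip.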