Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation

June 13, 2023
Authors: Shuai Yang, Yifan Zhou, Ziwei Liu, Chen Change Loy
cs.AI

Abstract

Large text-to-image diffusion models have exhibited impressive proficiency in generating high-quality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zero-shot text-guided video-to-video translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical cross-frame constraints applied to enforce coherence in shapes, textures, and colors. The second part propagates the key frames to the other frames with temporal-aware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost (without re-training or optimization). The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering high-quality, temporally coherent videos.
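To make the two-part pipeline concrete, here is a minimal Python sketch of the control flow the abstract describes: translate sparse key frames, then propagate each translated key frame to the in-between frames. The function names `translate_key_frame`, `propagate`, and `rerender`, and the parameter `key_stride`, are hypothetical placeholders, not the authors' API; the stand-in bodies (identity translation, linear blending) stand in for the paper's actual diffusion model with hierarchical cross-frame constraints and its temporal-aware patch matching.

```python
import numpy as np

def translate_key_frame(frame: np.ndarray, prompt: str,
                        prev_key: np.ndarray | None) -> np.ndarray:
    # Stand-in for Part 1: in the real framework this runs an adapted
    # text-guided diffusion model, with hierarchical cross-frame constraints
    # against prev_key enforcing shape/texture/color coherence.
    return frame

def propagate(key_a: np.ndarray, key_b: np.ndarray,
              frames: list[np.ndarray]) -> list[np.ndarray]:
    # Stand-in for Part 2: the paper uses temporal-aware patch matching and
    # frame blending; here we just linearly blend between the two key frames.
    n = len(frames)
    return [((1 - t) * key_a + t * key_b).astype(frames[0].dtype)
            for t in np.linspace(0.0, 1.0, n)]

def rerender(frames: list[np.ndarray], prompt: str,
             key_stride: int = 10) -> list[np.ndarray]:
    # Part 1: translate every key_stride-th frame, conditioning each key
    # frame on the previously translated one.
    key_idx = list(range(0, len(frames), key_stride))
    keys, prev = {}, None
    for i in key_idx:
        keys[i] = translate_key_frame(frames[i], prompt, prev)
        prev = keys[i]
    # Part 2: fill in each segment between consecutive key frames.
    out = list(frames)
    for a, b in zip(key_idx, key_idx[1:]):
        out[a:b + 1] = propagate(keys[a], keys[b], frames[a:b + 1])
    return out
```

Because both stages operate on frames rather than model weights, the sketch mirrors why the method is zero-shot: no re-training or per-video optimization appears anywhere in the loop, and the key-frame translator can be swapped for any image diffusion stack (e.g., one augmented with LoRA or ControlNet).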