

PEEKABOO: Interactive Video Generation via Masked-Diffusion

December 12, 2023
作者: Yash Jain, Anshul Nasery, Vibhav Vineet, Harkirat Behl
cs.AI

Abstract

Recently there has been a lot of progress in text-to-video generation, with state-of-the-art models being capable of generating high-quality, realistic videos. However, these models lack the capability for users to interactively control and generate videos, which could potentially unlock new areas of application. As a first step towards this goal, we tackle the problem of endowing diffusion-based video generation models with interactive spatio-temporal control over their output. To this end, we take inspiration from recent advances in the segmentation literature to propose a novel spatio-temporal masked attention module - Peekaboo. This module is a training-free, no-inference-overhead addition to off-the-shelf video generation models which enables spatio-temporal control. We also propose an evaluation benchmark for the interactive video generation task. Through extensive qualitative and quantitative evaluation, we establish that Peekaboo enables controlled video generation and even obtains a gain of up to 3.8x in mIoU over baseline models.
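The core idea of masked attention can be illustrated with a toy sketch. This is not the paper's actual module (Peekaboo operates on spatio-temporal attention inside a diffusion UNet); it is a minimal, hypothetical example of the general mechanism: pixel queries outside a user-specified foreground region are blocked from attending to the foreground text token, so the object is steered into the masked region without any retraining. All names and shapes here are illustrative assumptions.

```python
import math

def masked_cross_attention(queries, keys, values, fg_pixels, fg_token):
    """Toy sketch of spatially masked cross-attention (hypothetical, not the
    paper's exact module). Pixels outside the user-drawn foreground mask
    (fg_pixels) are prevented from attending to the foreground text token."""
    d = len(queries[0])
    outputs, all_weights = [], []
    for i, q in enumerate(queries):
        # raw attention logits between this pixel query and every text token
        logits = [sum(qj * kj for qj, kj in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # background pixel: mask out the foreground token before softmax
        if i not in fg_pixels:
            logits[fg_token] = float("-inf")
        # numerically stable softmax over text tokens
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
        all_weights.append(weights)
    return outputs, all_weights
```

Because the mask only adds a bias before the softmax, such a module can be dropped into a pretrained attention layer without extra parameters, which is consistent with the training-free, no-inference-overhead property the abstract claims.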