TrailBlazer: Trajectory Control for Diffusion-Based Video Generation
December 31, 2023
Authors: Wan-Duo Kurt Ma, J. P. Lewis, W. Bastiaan Kleijn
cs.AI
Abstract
Within recent approaches to text-to-video (T2V) generation, achieving
controllability in the synthesized video is often a challenge. Typically, this
issue is addressed by providing low-level per-frame guidance in the form of
edge maps, depth maps, or an existing video to be altered. However, the process
of obtaining such guidance can be labor-intensive. This paper focuses on
enhancing controllability in video synthesis by employing straightforward
bounding boxes to guide the subject in various ways, all without the need for
neural network training, finetuning, optimization at inference time, or the use
of pre-existing videos. Our algorithm, TrailBlazer, is built upon a
pre-trained T2V model and is easy to implement. The subject is directed by a
bounding box through the proposed spatial and temporal attention map editing.
Moreover, we introduce the concept of keyframing, allowing the subject
trajectory and overall appearance to be guided by both a moving bounding box
and corresponding prompts, without the need to provide a detailed mask. The
method is efficient, with negligible additional computation relative to the
underlying pre-trained model. Despite the simplicity of the bounding box
guidance, the resulting motion is surprisingly natural, with emergent effects
including perspective and movement toward the virtual camera as the box size
increases.
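
For intuition, below is a minimal Python sketch of the two ingredients the abstract describes: linearly interpolating keyframed bounding boxes across frames, and biasing a denoiser's cross-attention logits toward the box region for the subject's text token. All function names, signatures, and the particular attention-bias scheme are illustrative assumptions, not the paper's actual implementation (which edits both spatial and temporal attention inside a pre-trained T2V model).

import torch

def interpolate_boxes(keyframes, num_frames):
    # Linearly interpolate (left, top, right, bottom) boxes between
    # user-specified keyframes. Boxes use [0, 1] normalized coordinates.
    # Assumes keyframes (a dict: frame index -> box) includes entries
    # for frame 0 and frame num_frames - 1.
    marks = sorted(keyframes)
    boxes = []
    for f in range(num_frames):
        lo = max(k for k in marks if k <= f)
        hi = min(k for k in marks if k >= f)
        t = 0.0 if hi == lo else (f - lo) / (hi - lo)
        a = torch.tensor(keyframes[lo], dtype=torch.float)
        b = torch.tensor(keyframes[hi], dtype=torch.float)
        boxes.append((1 - t) * a + t * b)
    return torch.stack(boxes)  # shape: (num_frames, 4)

def edit_cross_attention(attn_logits, box, subject_token, h, w, strength=5.0):
    # Bias one frame's cross-attention logits so the subject token is
    # encouraged inside its bounding box and suppressed outside it.
    # attn_logits: (heads, h * w, num_text_tokens) for a single frame.
    x0, y0, x1, y1 = (box * torch.tensor([w, h, w, h])).long().tolist()
    mask = torch.zeros(h, w, dtype=torch.bool)
    mask[y0:y1, x0:x1] = True
    bias = torch.full((h * w,), -strength)
    bias[mask.flatten()] = strength
    attn_logits[:, :, subject_token] = attn_logits[:, :, subject_token] + bias
    return attn_logits

# Example: steer the subject from the upper-left to the lower-right
# corner over 24 frames using only two keyframed boxes.
boxes = interpolate_boxes({0: [0.0, 0.0, 0.3, 0.3],
                           23: [0.7, 0.7, 1.0, 1.0]}, num_frames=24)

Growing the box across keyframes, rather than translating it, would correspond to the emergent "movement toward the virtual camera" effect the abstract mentions, since the subject is encouraged to occupy an increasingly large image region.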