Fine-grained Controllable Video Generation via Object Appearance and Context

December 5, 2023
Authors: Hsin-Ping Huang, Yu-Chuan Su, Deqing Sun, Lu Jiang, Xuhui Jia, Yukun Zhu, Ming-Hsuan Yang
cs.AI

Abstract

Text-to-video generation has shown promising results. However, with only natural language as input, users often find it difficult to provide the detailed information needed to precisely control the model's output. In this work, we propose fine-grained controllable video generation (FACTOR) to achieve detailed control. Specifically, FACTOR aims to control objects' appearance and context, including their location and category, in conjunction with the text prompt. To achieve detailed control, we propose a unified framework that jointly injects control signals into an existing text-to-video model. Our model consists of a joint encoder and adaptive cross-attention layers. By optimizing the encoder and the inserted layers, we adapt the model to generate videos aligned with both the text prompt and the fine-grained control. Compared to existing methods that rely on dense control signals such as edge maps, ours provides a more intuitive and user-friendly interface that allows object-level fine-grained control. Our method achieves controllability of object appearance without fine-tuning, reducing the per-subject optimization effort for users. Extensive experiments on standard benchmark datasets and user-provided inputs validate that our model obtains a 70% improvement in controllability metrics over competitive baselines.
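To make the described architecture concrete, below is a minimal PyTorch sketch of how a joint control encoder and a gated ("adaptive") cross-attention layer could be wired into a frozen text-to-video backbone. All module names, dimensions, and the zero-initialized gate are illustrative assumptions based on the abstract, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class JointControlEncoder(nn.Module):
    """Encodes per-object control signals (appearance embedding, location box,
    category) into a shared sequence of control tokens. Layer choices and sizes
    here are assumptions for illustration only."""
    def __init__(self, dim=768, num_categories=1000):
        super().__init__()
        self.category_emb = nn.Embedding(num_categories, dim)
        self.box_proj = nn.Linear(4, dim)           # (x, y, w, h) per object
        self.appearance_proj = nn.Linear(dim, dim)  # e.g. an image-encoder embedding
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, appearance, boxes, categories):
        # appearance: (B, N, dim), boxes: (B, N, 4), categories: (B, N)
        # Sum the three signals per object, then let them interact.
        tokens = (self.appearance_proj(appearance)
                  + self.box_proj(boxes)
                  + self.category_emb(categories))
        return self.fuse(tokens)  # (B, N, dim) control tokens

class AdaptiveCrossAttention(nn.Module):
    """Extra cross-attention inserted alongside the backbone's frozen text
    cross-attention. A zero-initialized gate keeps the pretrained behavior
    intact at the start of training and blends the control in gradually."""
    def __init__(self, dim=768, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, nhead, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # tanh(0) = 0: identity at init

    def forward(self, hidden, control_tokens):
        # hidden: (B, T, dim) video latents; control_tokens: (B, N, dim)
        out, _ = self.attn(hidden, control_tokens, control_tokens)
        return hidden + torch.tanh(self.gate) * out
```

Starting the gate at zero is a common adapter-style design (used, for example, by GLIGEN's gated attention and ControlNet's zero-initialized layers): the frozen text-to-video weights are undisturbed at initialization, and only the new encoder and inserted layers need to be optimized, consistent with the fine-tuning-free appearance control the abstract claims.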