

Fine-grained Controllable Video Generation via Object Appearance and Context

December 5, 2023
作者: Hsin-Ping Huang, Yu-Chuan Su, Deqing Sun, Lu Jiang, Xuhui Jia, Yukun Zhu, Ming-Hsuan Yang
cs.AI

Abstract

Text-to-video generation has shown promising results. However, by taking only natural languages as input, users often face difficulties in providing detailed information to precisely control the model's output. In this work, we propose fine-grained controllable video generation (FACTOR) to achieve detailed control. Specifically, FACTOR aims to control objects' appearances and context, including their location and category, in conjunction with the text prompt. To achieve detailed control, we propose a unified framework to jointly inject control signals into the existing text-to-video model. Our model consists of a joint encoder and adaptive cross-attention layers. By optimizing the encoder and the inserted layer, we adapt the model to generate videos that are aligned with both text prompts and fine-grained control. Compared to existing methods relying on dense control signals such as edge maps, we provide a more intuitive and user-friendly interface to allow object-level fine-grained control. Our method achieves controllability of object appearances without finetuning, which reduces the per-subject optimization efforts for the users. Extensive experiments on standard benchmark datasets and user-provided inputs validate that our model obtains a 70% improvement in controllability metrics over competitive baselines.
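The abstract describes the mechanism only at a high level: a joint encoder fuses the text prompt with per-object control signals (appearance, location, category), and inserted adaptive cross-attention layers let video features attend to that fused context. The sketch below is a rough single-head illustration of this conditioning pattern, not the authors' implementation; all shapes, names, and the fusion-by-concatenation choice are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, Wq, Wk, Wv):
    """Single-head cross-attention: video tokens attend to a context sequence."""
    q = queries @ Wq                     # (T, d) projected queries
    k = context @ Wk                     # (S, d) projected keys
    v = context @ Wv                     # (S, d) projected values
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v  # (T, d) conditioned features

rng = np.random.default_rng(0)
d = 16
# Hypothetical inputs: latent video tokens, text-prompt embeddings, and
# per-object control embeddings (appearance + location/category tokens).
video_tokens = rng.normal(size=(8, d))
text_emb     = rng.normal(size=(5, d))
control_emb  = rng.normal(size=(3, d))

# "Joint encoder" idea, simplified to concatenation: text and control
# tokens form one context sequence that the inserted layer attends to.
context = np.concatenate([text_emb, control_emb], axis=0)  # (8, d)

Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = cross_attention(video_tokens, context, Wq, Wk, Wv)
print(out.shape)
```

In the paper's framework, only the encoder and these inserted layers are optimized while the pretrained text-to-video backbone is reused, which is what allows appearance control without per-subject finetuning.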