CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance
March 13, 2025
Authors: Yufan Deng, Xun Guo, Yizhi Wang, Jacob Zhiyuan Fang, Angtian Wang, Shenghai Yuan, Yiding Yang, Bo Liu, Haibin Huang, Chongyang Ma
cs.AI
Abstract
Video generation has witnessed remarkable progress with the advent of deep
generative models, particularly diffusion models. While existing methods excel
in generating high-quality videos from text prompts or single images,
personalized multi-subject video generation remains a largely unexplored
challenge. This task involves synthesizing videos that incorporate multiple
distinct subjects, each defined by separate reference images, while ensuring
temporal and spatial consistency. Current approaches primarily rely on mapping
subject images to keywords in text prompts, which introduces ambiguity and
limits their ability to model subject relationships effectively. In this paper,
we propose CINEMA, a novel framework for coherent multi-subject video
generation that leverages a Multimodal Large Language Model (MLLM). Our approach
eliminates the need for explicit correspondences between subject images and
text entities, mitigating ambiguity and reducing annotation effort. By
leveraging an MLLM to interpret subject relationships, our method facilitates
scalability, enabling the use of large and diverse datasets for training.
Furthermore, our framework can be conditioned on varying numbers of subjects,
offering greater flexibility in personalized content creation. Through
extensive evaluations, we demonstrate that our approach significantly improves
subject consistency and overall video coherence, paving the way for advanced
applications in storytelling, interactive media, and personalized video
generation.
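The abstract does not give implementation details, but the core idea — conditioning generation on a variable number of reference subjects without mapping each image to a text keyword — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: `encode_subjects` is a hypothetical stand-in for the MLLM encoder (here it just emits random token embeddings), and the cross-attention shows how a diffusion model's video tokens could attend over the concatenated subject tokens regardless of how many subjects are provided.

```python
import numpy as np

def encode_subjects(ref_images, tokens_per_subject=4, d=64):
    # Hypothetical placeholder for the MLLM encoder: each reference
    # image is mapped to a fixed number of embedding tokens.
    rng = np.random.default_rng(0)
    return [rng.standard_normal((tokens_per_subject, d)) for _ in ref_images]

def cross_attention(video_tokens, subject_tokens):
    # Video latent tokens attend over the concatenation of all subject
    # tokens, so no explicit image-to-keyword correspondence is needed
    # and the subject count can vary freely.
    ctx = np.concatenate(subject_tokens, axis=0)          # (N*k, d)
    scores = video_tokens @ ctx.T / np.sqrt(ctx.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax
    return weights @ ctx                                  # (T, d)

# Works unchanged for two subjects or five:
subjects = encode_subjects(["cat.png", "dog.png", "hat.png"])
conditioned = cross_attention(np.zeros((10, 64)), subjects)
```

The design point being sketched is that the conditioning interface sees only a flat sequence of subject tokens, which is what allows training on datasets with arbitrary and varying numbers of reference images per sample.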