
VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing

February 24, 2025
Authors: Xiangpeng Yang, Linchao Zhu, Hehe Fan, Yi Yang
cs.AI

Abstract

Recent advancements in diffusion models have significantly improved video generation and editing capabilities. However, multi-grained video editing, which encompasses class-level, instance-level, and part-level modifications, remains a formidable challenge. The major difficulties in multi-grained editing include semantic misalignment of text-to-region control and feature coupling within the diffusion model. To address these difficulties, we present VideoGrain, a zero-shot approach that modulates space-time (cross- and self-) attention mechanisms to achieve fine-grained control over video content. We enhance text-to-region control by amplifying each local prompt's attention to its corresponding spatial-disentangled region while minimizing interactions with irrelevant areas in cross-attention. Additionally, we improve feature separation by increasing intra-region awareness and reducing inter-region interference in self-attention. Extensive experiments demonstrate our method achieves state-of-the-art performance in real-world scenarios. Our code, data, and demos are available at https://knightyxp.github.io/VideoGrain_project_page/
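The abstract describes modulating both cross- and self-attention with spatially disentangled region masks: each local prompt's attention is amplified over its own region and suppressed elsewhere, while self-attention is biased toward intra-region pairs and away from inter-region pairs. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' released code); the function name, mask layout, and the `alpha`/`beta` modulation strengths are assumptions.

```python
import torch

def modulate_attention(scores, region_masks, prompt_token_groups=None,
                       alpha=5.0, beta=5.0):
    """Illustrative sketch of region-masked attention modulation.

    scores:              raw attention logits, shape (heads, Nq, Nk)
    region_masks:        bool tensor (R, N); which of the N spatial tokens
                         belong to each of the R regions
    prompt_token_groups: for cross-attention, bool tensor (R, Nk) marking the
                         text tokens of each local prompt; None means
                         self-attention (Nq == Nk == N spatial tokens)
    alpha, beta:         positive/negative modulation strengths (assumed values)
    """
    mod = torch.zeros_like(scores)
    R, N = region_masks.shape

    if prompt_token_groups is not None:
        # Cross-attention: for spatial queries inside region r, boost logits to
        # the text tokens of local prompt r and damp logits to all other tokens.
        for r in range(R):
            q_in = region_masks[r]                     # (N,) queries in region r
            k_in = prompt_token_groups[r]              # (Nk,) tokens of prompt r
            mod[:, q_in] += alpha * k_in.float() - beta * (~k_in).float()
    else:
        # Self-attention: raise intra-region awareness, reduce inter-region links.
        for r in range(R):
            m = region_masks[r].float()
            intra = m[:, None] * m[None, :]            # (N, N) same-region pairs
            inter = m[:, None] * (1.0 - m[None, :])    # query in r, key outside r
            mod += alpha * intra - beta * inter

    # Modulation is applied to the logits before the softmax normalization.
    return torch.softmax(scores + mod, dim=-1)
```

Applying the bias to the pre-softmax logits keeps each query's attention a valid distribution while shifting its mass toward the intended region or local prompt.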
