
CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation

June 4, 2024
作者: Dejia Xu, Weili Nie, Chao Liu, Sifei Liu, Jan Kautz, Zhangyang Wang, Arash Vahdat
cs.AI

Abstract

Recently, video diffusion models have emerged as expressive generative tools that make high-quality video content creation readily available to general users. However, these models often do not offer precise control over camera poses during video generation, limiting the expression of cinematic language and user control. To address this issue, we introduce CamCo, which allows fine-grained Camera pose Control for image-to-video generation. We equip a pre-trained image-to-video generator with accurately parameterized camera pose input using Plücker coordinates. To enhance 3D consistency in the videos produced, we integrate an epipolar attention module in each attention block that enforces epipolar constraints on the feature maps. Additionally, we fine-tune CamCo on real-world videos with camera poses estimated through structure-from-motion algorithms to better synthesize object motion. Our experiments show that CamCo significantly improves 3D consistency and camera control capabilities compared to previous models while effectively generating plausible object motion. Project page: https://ir1d.github.io/CamCo/
December 12, 2024