GenXD: Generating Any 3D and 4D Scenes
November 4, 2024
Authors: Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, Lijuan Wang
cs.AI
Abstract
Recent developments in 2D visual generation have been remarkably successful.
However, 3D and 4D generation remain challenging in real-world applications due
to the lack of large-scale 4D data and effective model design. In this paper,
we propose to jointly investigate general 3D and 4D generation by leveraging
camera and object movements commonly observed in daily life. Due to the lack of
real-world 4D data in the community, we first propose a data curation pipeline
to obtain camera poses and object motion strength from videos. Based on this
pipeline, we introduce a large-scale real-world 4D scene dataset: CamVid-30K.
By leveraging all the 3D and 4D data, we develop our framework, GenXD, which
allows us to produce any 3D or 4D scene. We propose multiview-temporal modules,
which disentangle camera and object movements, to seamlessly learn from both 3D
and 4D data. Additionally, GenXD employs masked latent conditions to support a
variety of conditioning views. GenXD can generate videos that follow the camera
trajectory as well as consistent 3D views that can be lifted into 3D
representations. We perform extensive evaluations across various real-world and
synthetic datasets, demonstrating GenXD's effectiveness and versatility
compared to previous methods in 3D and 4D generation.