Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability
February 19, 2024
Authors: Xuelin Qian, Yu Wang, Simian Luo, Yinda Zhang, Ying Tai, Zhenyu Zhang, Chengjie Wang, Xiangyang Xue, Bo Zhao, Tiejun Huang, Yunsheng Wu, Yanwei Fu
cs.AI
Abstract
Auto-regressive models have achieved impressive results in 2D image
generation by modeling joint distributions in grid space. In this paper, we
extend auto-regressive models to the 3D domain and seek a stronger 3D shape
generation ability by improving auto-regressive models in both capacity and
scalability. First, we leverage an ensemble of publicly available 3D datasets
to facilitate the training of large-scale models. The ensemble comprises a
comprehensive collection of approximately 900,000 objects with multiple
properties: meshes, points, voxels, rendered images, and text captions. This
diverse labeled dataset, termed Objaverse-Mix, empowers our model to learn
from a wide range of object variations. However, directly applying 3D
auto-regression faces critical challenges: the high computational demands of
volumetric grids and the ambiguous auto-regressive order along grid
dimensions, which result in inferior-quality 3D shapes. To this end, we
present a novel framework, Argus3D, that improves capacity. Concretely, our
approach introduces discrete representation learning based on a latent vector
instead of volumetric grids, which not only reduces computational costs but
also preserves essential geometric details by learning the joint distributions
in a more tractable order. Conditional generation can thus be realized by
simply concatenating various conditioning inputs, such as point clouds,
categories, images, and texts, to the latent vector. In addition, thanks to
the simplicity of our model architecture, we naturally scale our approach up
to a larger model with an impressive 3.6 billion parameters, further enhancing
the quality of versatile 3D generation. Extensive experiments on four
generation tasks demonstrate that Argus3D can synthesize diverse and faithful
shapes across multiple categories, achieving remarkable performance.
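The core recipe the abstract describes — auto-regressive modeling over a flattened sequence of discrete latent codes, with conditioning realized by concatenating condition tokens as a prefix — can be sketched as follows. This is not the authors' code: the predictor `toy_logits` is a hypothetical stand-in for the large transformer, and the constants `CODEBOOK_SIZE` and `LATENT_LEN` are assumed values for illustration.

```python
# Hedged sketch of conditional auto-regressive generation over discrete
# latent codes (NOT the Argus3D implementation). A real system would use
# a large transformer to model p(z_i | z_<i, c); here a tiny hash-seeded
# random projection stands in so the control flow is runnable end to end.
import numpy as np

CODEBOOK_SIZE = 512   # size of the discrete latent vocabulary (assumed)
LATENT_LEN = 32       # length of the flattened latent-code sequence (assumed)

def toy_logits(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for the next-token predictor: returns logits over the
    codebook given the full context (condition prefix + generated codes)."""
    seed = abs(hash(tokens.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(CODEBOOK_SIZE)

def generate(condition_tokens: np.ndarray) -> np.ndarray:
    """Greedily sample LATENT_LEN discrete codes auto-regressively.
    Conditioning (e.g. tokenized image/text/category/point-cloud features)
    is simply concatenated as a prefix of the sequence."""
    seq = condition_tokens.copy()
    for _ in range(LATENT_LEN):
        logits = toy_logits(seq)            # p(z_i | z_<i, c)
        next_tok = int(np.argmax(logits))   # greedy decoding for simplicity
        seq = np.append(seq, next_tok)
    return seq[len(condition_tokens):]      # drop the condition prefix

# Two hypothetical condition tokens; the output is the latent-code sequence,
# which a separate decoder would map back to a 3D shape.
codes = generate(np.array([7, 42], dtype=np.int64))
print(codes.shape)  # (32,)
```

Flattening the latent vector into a single ordered sequence is what sidesteps the "ambiguous auto-regressive order along grid dimensions" problem the abstract raises: a 1D latent has one natural left-to-right order, whereas a volumetric grid has no canonical traversal.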