Efficient Part-level 3D Object Generation via Dual Volume Packing
June 11, 2025
Authors: Jiaxiang Tang, Ruijie Lu, Zhaoshuo Li, Zekun Hao, Xuan Li, Fangyin Wei, Shuran Song, Gang Zeng, Ming-Yu Liu, Tsung-Yi Lin
cs.AI
Abstract
Recent progress in 3D object generation has greatly improved both the quality
and efficiency. However, most existing methods generate a single mesh with all
parts fused together, which limits the ability to edit or manipulate individual
parts. A key challenge is that different objects may have a varying number of
parts. To address this, we propose a new end-to-end framework for part-level 3D
object generation. Given a single input image, our method generates
high-quality 3D objects with an arbitrary number of complete and semantically
meaningful parts. We introduce a dual volume packing strategy that organizes
all parts into two complementary volumes, allowing for the creation of complete
and interleaved parts that assemble into the final object. Experiments show
that our model achieves better quality, diversity, and generalization than
previous image-based part-level generation methods.
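
To make the "two complementary volumes" idea concrete, the sketch below shows one plausible way parts could be split across two volumes: treat parts that touch each other as nodes of a contact graph and greedily 2-color it, so touching parts tend to land in different volumes and each volume holds complete, non-overlapping parts. This is only an illustrative heuristic under that assumption (the function name, the `contacts` input, and the greedy rule are hypothetical), not the paper's actual packing algorithm.

```python
from collections import defaultdict

def pack_parts_into_two_volumes(num_parts, contacts):
    """Greedily assign each part to one of two volumes so that parts in
    contact are, as far as possible, placed in different volumes.

    num_parts: number of parts in the object.
    contacts:  list of (i, j) index pairs for parts whose surfaces touch.

    Illustrative 2-coloring heuristic only; not the authors' exact rule.
    """
    neighbors = defaultdict(set)
    for i, j in contacts:
        neighbors[i].add(j)
        neighbors[j].add(i)

    assignment = {}
    # Visit highly connected parts first so they are separated early.
    for part in sorted(range(num_parts), key=lambda p: -len(neighbors[p])):
        # Count already-assigned touching parts in each volume.
        conflicts = [0, 0]
        for nb in neighbors[part]:
            if nb in assignment:
                conflicts[assignment[nb]] += 1
        # Place the part in the volume with fewer touching neighbors.
        assignment[part] = 0 if conflicts[0] <= conflicts[1] else 1
    return assignment

# Example: a chair with a seat (0), a back (1), and four legs (2-5),
# where the back and every leg touch the seat.
contacts = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
print(pack_parts_into_two_volumes(6, contacts))
# -> {0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}
# The seat occupies one volume; the back and legs share the other, so no
# two touching parts are fused within the same volume.
```

Under this toy assumption, each volume can be decoded into complete part meshes independently, and the union of the two volumes reassembles the full object, which mirrors the interleaved-parts intuition described in the abstract.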