Efficient Part-level 3D Object Generation via Dual Volume Packing
June 11, 2025
Authors: Jiaxiang Tang, Ruijie Lu, Zhaoshuo Li, Zekun Hao, Xuan Li, Fangyin Wei, Shuran Song, Gang Zeng, Ming-Yu Liu, Tsung-Yi Lin
cs.AI
Abstract
Recent progress in 3D object generation has greatly improved both the quality
and efficiency. However, most existing methods generate a single mesh with all
parts fused together, which limits the ability to edit or manipulate individual
parts. A key challenge is that different objects may have a varying number of
parts. To address this, we propose a new end-to-end framework for part-level 3D
object generation. Given a single input image, our method generates
high-quality 3D objects with an arbitrary number of complete and semantically
meaningful parts. We introduce a dual volume packing strategy that organizes
all parts into two complementary volumes, allowing for the creation of complete
and interleaved parts that assemble into the final object. Experiments show
that our model achieves better quality, diversity, and generalization than
previous image-based part-level generation methods.
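The abstract does not detail how parts are assigned to the two complementary volumes. As an illustrative sketch only (the function name, inputs, and algorithm are assumptions, not the paper's actual method), one plausible reading is a 2-coloring of a part-contact graph, so that touching parts are separated into different volumes and each volume contains only non-adjacent, well-separated parts:

```python
from collections import deque

def pack_parts_into_two_volumes(num_parts, contact_pairs):
    """Hypothetical sketch: assign each part to one of two volumes so
    that, where possible, parts in contact land in different volumes,
    via greedy BFS 2-coloring of the part-contact graph.

    num_parts     -- total number of parts in the object
    contact_pairs -- list of (i, j) index pairs of touching parts

    Returns a list `volume` where volume[i] is 0 or 1.
    """
    # Build an adjacency list from the contact pairs.
    adj = [[] for _ in range(num_parts)]
    for i, j in contact_pairs:
        adj[i].append(j)
        adj[j].append(i)

    volume = [-1] * num_parts  # -1 marks an unassigned part
    for start in range(num_parts):
        if volume[start] != -1:
            continue
        # Color each connected component with BFS, alternating volumes.
        volume[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if volume[v] == -1:
                    volume[v] = 1 - volume[u]
                    queue.append(v)
    return volume
```

For example, a chair whose seat (part 0) touches four legs (parts 1-4) would place the seat in one volume and all four legs in the other, so neither volume contains two touching parts.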