SegviGen: Repurposing 3D Generative Model for Part Segmentation
March 17, 2026
Authors: Lin Li, Haoran Feng, Zehuan Huang, Haohua Chen, Wenbo Nie, Shaohua Hou, Keqing Fan, Pan Hu, Sheng Wang, Buyu Li, Lu Sheng
cs.AI
Abstract
We introduce SegviGen, a framework that repurposes native 3D generative models for 3D part segmentation. Existing pipelines either lift strong 2D priors into 3D via distillation or multi-view mask aggregation, often suffering from cross-view inconsistency and blurred boundaries, or explore native 3D discriminative segmentation, which typically requires large-scale annotated 3D data and substantial training resources. In contrast, SegviGen leverages the structured priors encoded in pretrained 3D generative models to induce segmentation through distinctive part colorization, establishing a novel and efficient framework for part segmentation. Specifically, SegviGen encodes a 3D asset and predicts part-indicative colors on the active voxels of a geometry-aligned reconstruction. It supports interactive part segmentation, full segmentation, and full segmentation with 2D guidance in a unified framework. Extensive experiments show that SegviGen improves over the prior state of the art by 40% on interactive part segmentation and by 15% on full segmentation, while using only 0.32% of the labeled training data. This demonstrates that pretrained 3D generative priors transfer effectively to 3D part segmentation, enabling strong performance with limited supervision. See our project page at https://fenghora.github.io/SegviGen-Page/.
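Since the abstract frames segmentation as predicting part-indicative colors on active voxels, a natural final decoding step is to snap each predicted color to the nearest entry in a fixed palette of distinctive part colors. The sketch below illustrates this idea only; the palette, the function name `colors_to_part_labels`, and the nearest-color rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def colors_to_part_labels(voxel_colors: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Assign each active voxel the index of its nearest palette color.

    voxel_colors: (N, 3) predicted RGB values in [0, 1] for active voxels.
    palette: (K, 3) distinctive reference colors, one per part.
    Returns an (N,) array of part indices in [0, K).
    """
    # Pairwise squared color distances between voxels and palette entries.
    diff = voxel_colors[:, None, :] - palette[None, :, :]  # shape (N, K, 3)
    dist2 = np.sum(diff * diff, axis=-1)                   # shape (N, K)
    # Each voxel takes the label of the closest palette color.
    return np.argmin(dist2, axis=-1)

# Hypothetical 3-part palette: red, green, blue.
palette = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
# Noisy predicted colors for four active voxels.
pred = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.9],
                 [0.95, 0.05, 0.1]])
labels = colors_to_part_labels(pred, palette)
print(labels)  # → [0 1 2 0]
```

Snapping to a palette makes the generative model's continuous color output robust to small prediction noise, since each voxel only needs to land closer to its own part's color than to any other.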