Generalized Few-shot 3D Point Cloud Segmentation with Vision-Language Model
March 20, 2025
Authors: Zhaochong An, Guolei Sun, Yun Liu, Runjia Li, Junlin Han, Ender Konukoglu, Serge Belongie
cs.AI
Abstract
Generalized few-shot 3D point cloud segmentation (GFS-PCS) adapts models to
new classes with few support samples while retaining base class segmentation.
Existing GFS-PCS methods enhance prototypes by interacting with support or
query features but remain limited by sparse knowledge from few-shot samples.
Meanwhile, 3D vision-language models (3D VLMs), generalizing across open-world
novel classes, contain rich but noisy novel class knowledge. In this work, we
introduce GFS-VL, a GFS-PCS framework that synergizes dense but noisy
pseudo-labels from 3D VLMs with precise yet sparse few-shot samples to maximize
the strengths of both. Specifically, we present a prototype-guided pseudo-label
selection to filter low-quality regions, followed by an adaptive infilling
strategy that combines knowledge from pseudo-label contexts and few-shot
samples to adaptively label the filtered, unlabeled areas. Additionally, we
design a novel-base mix strategy to embed few-shot samples into training
scenes, preserving essential context for improved novel class learning.
Moreover, recognizing the limited diversity in current GFS-PCS benchmarks, we
introduce two challenging benchmarks with diverse novel classes for
comprehensive generalization evaluation. Experiments validate the effectiveness
of our framework across models and datasets. Our approach and benchmarks
provide a solid foundation for advancing GFS-PCS in the real world. The code is
at https://github.com/ZhaochongAn/GFS-VL.
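
To make the pseudo-label refinement described in the abstract more concrete, below is a minimal PyTorch sketch of a prototype-guided selection step followed by an adaptive infilling step. The function names, the cosine-similarity criterion, and the single `sim_thresh` threshold are illustrative assumptions rather than the paper's exact formulation; the released code at https://github.com/ZhaochongAn/GFS-VL is the authoritative implementation.

```python
# Minimal sketch (assumed details, not the paper's exact method): few-shot
# prototypes filter noisy 3D-VLM pseudo-labels, then infill unlabeled points.
import torch
import torch.nn.functional as F


def build_prototypes(support_feats, support_labels, num_classes):
    """Average the few-shot support features per novel class -> [C, D] prototypes.

    Assumes every novel class appears at least once in the support set.
    """
    protos = [support_feats[support_labels == c].mean(dim=0) for c in range(num_classes)]
    return F.normalize(torch.stack(protos), dim=-1)


def select_and_infill(point_feats, pseudo_labels, prototypes, sim_thresh=0.5):
    """
    point_feats:   [N, D] backbone features of one training scene
    pseudo_labels: [N] noisy novel-class ids from a 3D VLM (-1 = unlabeled)
    prototypes:    [C, D] L2-normalized few-shot prototypes
    """
    feats = F.normalize(point_feats, dim=-1)
    sims = feats @ prototypes.T                       # [N, C] cosine similarities

    # 1) Prototype-guided selection: keep a pseudo-label only if the point
    #    actually resembles the prototype of the class it was assigned to.
    labels = pseudo_labels.clone().long()
    valid = labels >= 0
    kept = labels[valid]
    kept[sims[valid, kept] <= sim_thresh] = -1        # drop low-quality regions
    labels[valid] = kept

    # 2) Adaptive infilling: mix the few-shot prototypes with prototypes taken
    #    from the surviving pseudo-label context, then assign each remaining
    #    unlabeled point to its best match when the match is confident enough.
    infill_protos = prototypes.clone()
    for c in range(prototypes.shape[0]):
        ctx = feats[labels == c]
        if len(ctx) > 0:
            mixed = 0.5 * prototypes[c] + 0.5 * F.normalize(ctx.mean(dim=0), dim=-1)
            infill_protos[c] = F.normalize(mixed, dim=-1)

    unlabeled = labels < 0
    best_sim, best_cls = (feats[unlabeled] @ infill_protos.T).max(dim=-1)
    labels[unlabeled] = torch.where(best_sim > sim_thresh, best_cls,
                                    torch.full_like(best_cls, -1))
    return labels
```

In this sketch the infilling prototypes blend the few-shot prototypes with features of the regions that survived filtering, mirroring the abstract's statement that infilling combines pseudo-label context with few-shot knowledge; the exact weighting is an assumption.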
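Similarly, the novel-base mix strategy can be pictured as pasting a few-shot novel sample, together with its immediate surroundings, into a base training scene. The sketch below is again a hedged illustration: the context radius, the re-centering, and the random placement are assumed details, not the paper's actual mixing rule.

```python
# Hypothetical sketch of a novel-base mix step: a block cropped from a few-shot
# support sample (the novel object plus nearby context points) is pasted into a
# base training scene. The context radius and placement rule are assumptions.
import torch


def novel_base_mix(scene_xyz, scene_labels, support_xyz, support_labels,
                   novel_class, context_radius=0.5):
    """Return a training scene augmented with one novel-class block."""
    # Crop the novel object together with its local context from the support sample.
    obj_mask = support_labels == novel_class
    center = support_xyz[obj_mask].mean(dim=0)
    dist = torch.linalg.norm(support_xyz - center, dim=-1)
    block_mask = obj_mask | (dist < context_radius)
    block_xyz = support_xyz[block_mask] - center          # re-center the block

    # Place the block at a randomly chosen anchor point of the target scene.
    anchor = scene_xyz[torch.randint(len(scene_xyz), (1,))].squeeze(0)
    block_xyz = block_xyz + anchor

    mixed_xyz = torch.cat([scene_xyz, block_xyz], dim=0)
    mixed_labels = torch.cat([scene_labels, support_labels[block_mask]], dim=0)
    return mixed_xyz, mixed_labels
```

Keeping the nearby context points, rather than pasting the isolated object, is what the abstract refers to as preserving essential context for improved novel class learning.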