Semantic-SAM: Segment and Recognize Anything at Any Granularity
July 10, 2023
Authors: Feng Li, Hao Zhang, Peize Sun, Xueyan Zou, Shilong Liu, Jianwei Yang, Chunyuan Li, Lei Zhang, Jianfeng Gao
cs.AI
Abstract
In this paper, we introduce Semantic-SAM, a universal image segmentation
model that can segment and recognize anything at any desired granularity. Our
model offers two key advantages: semantic-awareness and granularity-abundance.
To achieve semantic-awareness, we consolidate multiple datasets across three
granularities and introduce decoupled classification for objects and parts.
This allows our model to capture rich semantic information. For the
multi-granularity capability, we propose a multi-choice learning scheme during
training, enabling each click to generate masks at multiple levels that
correspond to multiple ground-truth masks. Notably, this work represents the
first attempt to jointly train a model on SA-1B, generic, and part segmentation
datasets. Experimental results and visualizations demonstrate that our model
successfully achieves semantic-awareness and granularity-abundance.
Furthermore, combining SA-1B training with other segmentation tasks, such as
panoptic and part segmentation, leads to performance improvements. We will
provide code and a demo for further exploration and evaluation.
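To make the multi-choice learning scheme concrete, below is a minimal sketch (not the authors' released implementation) of the many-to-many matching it implies: each click produces K candidate masks, which are Hungarian-matched against all ground-truth masks containing that click, so one prompt can be supervised at several granularities at once. The names `K`, `dice_loss`, and `match_click_predictions` are illustrative assumptions.

```python
# Hedged sketch of multi-choice mask matching for one click, assuming
# K predicted masks per click and M overlapping ground-truth masks (K >= M).
import torch
from scipy.optimize import linear_sum_assignment

def dice_loss(pred, gt, eps=1e-6):
    """Soft Dice loss between predicted mask probabilities and a binary GT mask."""
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def match_click_predictions(pred_masks, gt_masks):
    """Match K predicted masks to M ground-truth masks via Hungarian matching.

    pred_masks: (K, H, W) probabilities from one click's K output queries.
    gt_masks:   (M, H, W) binary masks of every GT region containing the click.
    Returns matched (pred_idx, gt_idx) pairs and the summed mask loss.
    """
    K, M = pred_masks.shape[0], gt_masks.shape[0]
    cost = torch.zeros(K, M)
    for i in range(K):
        for j in range(M):
            cost[i, j] = dice_loss(pred_masks[i], gt_masks[j])
    rows, cols = linear_sum_assignment(cost.detach().numpy())
    loss = sum(dice_loss(pred_masks[i], gt_masks[j]) for i, j in zip(rows, cols))
    return list(zip(rows.tolist(), cols.tolist())), loss

# Toy usage: 6 proposals for one click, 3 GT masks at different granularities
# (e.g., part, object, group), each matched to a distinct proposal.
preds = torch.rand(6, 64, 64)
gts = (torch.rand(3, 64, 64) > 0.5).float()
pairs, loss = match_click_predictions(preds, gts)
print(pairs, loss.item())
```

Under this reading, unmatched proposals receive no mask loss for that click, which lets different output queries specialize to different granularity levels rather than collapsing onto a single ground-truth mask.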