Semantic-SAM: Segment and Recognize Anything at Any Granularity
July 10, 2023
作者: Feng Li, Hao Zhang, Peize Sun, Xueyan Zou, Shilong Liu, Jianwei Yang, Chunyuan Li, Lei Zhang, Jianfeng Gao
cs.AI
Abstract
In this paper, we introduce Semantic-SAM, a universal image segmentation
model that enables segmenting and recognizing anything at any desired granularity. Our
model offers two key advantages: semantic-awareness and granularity-abundance.
To achieve semantic-awareness, we consolidate multiple datasets across three
granularities and introduce decoupled classification for objects and parts.
This allows our model to capture rich semantic information. For the
multi-granularity capability, we propose a multi-choice learning scheme during
training, enabling each click to generate masks at multiple levels that
correspond to multiple ground-truth masks. Notably, this work represents the
first attempt to jointly train a model on SA-1B, generic, and part segmentation
datasets. Experimental results and visualizations demonstrate that our model
successfully achieves semantic-awareness and granularity-abundance.
Furthermore, combining SA-1B training with other segmentation tasks, such as
panoptic and part segmentation, leads to performance improvements. We will
provide code and a demo for further exploration and evaluation.
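To make the multi-choice learning scheme more concrete, below is a minimal sketch of the matching idea it implies: for a single click, the model predicts one mask per granularity level, and each ground-truth mask containing that click is assigned to the predicted level that fits it best. The function name, the use of IoU as the matching cost, and the Hungarian assignment are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch: match K per-click predicted masks (one per
# granularity level) to the M ground-truth masks that contain the click.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_levels_to_targets(pred_masks, gt_masks):
    """pred_masks: (K, H, W) boolean predictions for one click.
    gt_masks:     (M, H, W) boolean ground-truth masks, M <= K assumed.
    Returns index pairs (pred_idx, gt_idx) to supervise."""
    K, M = len(pred_masks), len(gt_masks)
    cost = np.zeros((K, M))
    for i in range(K):
        for j in range(M):
            inter = np.logical_and(pred_masks[i], gt_masks[j]).sum()
            union = np.logical_or(pred_masks[i], gt_masks[j]).sum()
            cost[i, j] = 1.0 - inter / max(union, 1)  # cost = 1 - IoU
    # Hungarian matching: each ground-truth mask claims a distinct level,
    # so one click can be supervised at several granularities at once.
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return pred_idx, gt_idx
```

Only the matched prediction–target pairs would receive mask losses, leaving unmatched levels free to specialize on other granularities; this is one plausible reading of how a single click can "generate masks at multiple levels that correspond to multiple ground-truth masks."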