EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
June 28, 2024
Authors: Yuxuan Zhang, Tianheng Cheng, Rui Hu, Lei Liu, Heng Liu, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang
cs.AI
Abstract
Segment Anything Model (SAM) has attracted widespread attention for its superior interactive segmentation capabilities with visual prompts, while text prompts remain largely unexplored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or an LLM) are well suited for adapting SAM to referring expression segmentation, and introduce the Early Vision-Language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text) and comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM, based on BEIT-3, achieves state-of-the-art performance on RefCOCO/+/g for referring expression segmentation and demonstrates the superiority of prompting SAM with early vision-language fusion. In addition, the proposed EVF-SAM, with 1.32B parameters, achieves remarkably higher performance than previous SAM-based methods built on large multimodal models while using nearly 82% fewer parameters.
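
The abstract outlines a two-stage design: an early-fusion vision-language encoder (BEIT-3) jointly processes the image and the referring text, and the fused feature is projected into a prompt embedding that conditions SAM's mask prediction. The following is a minimal, runnable PyTorch-style sketch of that data flow under those assumptions; every module here (ToyEarlyFusionVLM, ToySAMDecoder, the projector) is a hypothetical stand-in for illustration and does not reflect the authors' implementation or the real segment-anything API.

```python
# Hypothetical sketch of the EVF-SAM data flow described in the abstract.
# All modules are toy stand-ins; shapes and names are illustrative assumptions.
import torch
import torch.nn as nn


class ToyEarlyFusionVLM(nn.Module):
    """Stand-in for BEIT-3: image patches and text tokens share one transformer."""
    def __init__(self, dim=256, patch=16, vocab=30522):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.text_embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, text_ids):
        img_tokens = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, N, D)
        txt_tokens = self.text_embed(text_ids)                            # (B, T, D)
        fused = self.encoder(torch.cat([txt_tokens, img_tokens], dim=1))  # early fusion
        return fused[:, 0]  # first text token acts as a multimodal summary feature


class ToySAMDecoder(nn.Module):
    """Stand-in for SAM: an image encoder plus a prompt-conditioned mask head."""
    def __init__(self, dim=256):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.mask_head = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, image, prompt):
        feat = self.image_encoder(image)            # (B, D, H/16, W/16)
        feat = feat + prompt[:, :, None, None]      # condition on the referring prompt
        return self.mask_head(feat)                 # coarse mask logits


class EVFSAMSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.vlm = ToyEarlyFusionVLM(dim)
        self.projector = nn.Linear(dim, dim)  # maps fused feature into the prompt space
        self.sam = ToySAMDecoder(dim)

    def forward(self, image, text_ids):
        fused = self.vlm(image, text_ids)     # multimodal prompt from early fusion
        prompt = self.projector(fused)
        return self.sam(image, prompt)


if __name__ == "__main__":
    model = EVFSAMSketch()
    masks = model(torch.randn(1, 3, 224, 224), torch.randint(0, 30522, (1, 8)))
    print(masks.shape)  # torch.Size([1, 1, 14, 14])
```

The point the sketch highlights is the early fusion emphasized in the abstract: image and text tokens interact inside a single transformer from the first layer, in contrast to text-only prompt encoders (e.g., CLIP's text tower) that combine modalities only at the output.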