EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model

June 28, 2024
Authors: Yuxuan Zhang, Tianheng Cheng, Rui Hu, Lei Liu, Heng Liu, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang
cs.AI

Abstract

Segment Anything Model (SAM) has attracted widespread attention for its superior interactive segmentation capabilities with visual prompts, while text prompts remain underexplored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or LLM) are well suited for adapting SAM to referring expression segmentation and introduce the Early Vision-language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text) and comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM based on BEIT-3 obtains state-of-the-art performance on RefCOCO/+/g for referring expression segmentation and demonstrates the superiority of prompting SAM with early vision-language fusion. In addition, the proposed EVF-SAM with 1.32B parameters achieves remarkably higher performance while reducing nearly 82% of the parameters compared to previous SAM methods based on large multimodal models.
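
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of how an early-fusion vision-language encoder could produce a prompt embedding for SAM's mask decoder. The class names (`EarlyFusionEncoder`, `PromptProjector`), token shapes, and single-token prompt design are illustrative assumptions, not the authors' released implementation; the 256-dimensional output only mirrors the prompt-embedding size of the public SAM checkpoints.

```python
# Illustrative sketch only: module names, dimensions, and the single-token
# prompt design are assumptions, not the authors' released code.
import torch
import torch.nn as nn


class EarlyFusionEncoder(nn.Module):
    """Stand-in for a BEIT-3-style encoder that fuses image and text tokens
    early, inside a shared transformer (real weights/blocks omitted here)."""

    def __init__(self, dim: int = 768):
        super().__init__()
        # A real encoder would embed image patches and text tokens and run
        # them jointly through shared multiway transformer blocks.
        self.head = nn.Linear(dim, dim)

    def forward(self, image_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([image_tokens, text_tokens], dim=1)  # early fusion
        return self.head(fused.mean(dim=1))  # pooled multimodal feature


class PromptProjector(nn.Module):
    """Maps the fused multimodal feature into a prompt embedding for SAM."""

    def __init__(self, in_dim: int = 768, prompt_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, prompt_dim),
            nn.ReLU(),
            nn.Linear(prompt_dim, prompt_dim),
        )

    def forward(self, fused_feat: torch.Tensor) -> torch.Tensor:
        # One sparse prompt token per image: (batch, 1, prompt_dim).
        return self.proj(fused_feat).unsqueeze(1)


if __name__ == "__main__":
    encoder, projector = EarlyFusionEncoder(), PromptProjector()
    image_tokens = torch.randn(1, 196, 768)  # e.g., 14x14 ViT patch tokens
    text_tokens = torch.randn(1, 16, 768)    # tokenized referring expression
    prompt = projector(encoder(image_tokens, text_tokens))
    # `prompt` would be handed to SAM's mask decoder in place of point/box prompts.
    print(prompt.shape)  # torch.Size([1, 1, 256])
```

The point the abstract emphasizes is that the referring prompt comes from an encoder that fuses image and text early (as in BEIT-3), rather than from a text-only encoder such as CLIP's text tower.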
