

SAM 2: Segment Anything in Images and Videos

August 1, 2024
Authors: Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, Christoph Feichtenhofer
cs.AI

Abstract

We present Segment Anything Model 2 (SAM 2), a foundation model towards solving promptable visual segmentation in images and videos. We build a data engine, which improves model and data via user interaction, to collect the largest video segmentation dataset to date. Our model is a simple transformer architecture with streaming memory for real-time video processing. SAM 2 trained on our data provides strong performance across a wide range of tasks. In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM). We believe that our data, model, and insights will serve as a significant milestone for video segmentation and related perception tasks. We are releasing a version of our model, the dataset and an interactive demo.
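
To make the "promptable" video workflow in the abstract concrete, the sketch below follows the usage documented in the released sam2 repository (https://github.com/facebookresearch/sam2): prompt an object with a click on one frame, then let the streaming memory propagate its mask through the video. The config and checkpoint paths, the frames directory, and the click coordinates are placeholders; exact helper names may vary across releases.

    # A minimal sketch of promptable video segmentation with the released
    # sam2 package. Paths below are placeholders.
    import numpy as np
    import torch
    from sam2.build_sam import build_sam2_video_predictor

    predictor = build_sam2_video_predictor(
        "configs/sam2.1/sam2.1_hiera_l.yaml",  # model config (placeholder path)
        "checkpoints/sam2.1_hiera_large.pt",   # model weights (placeholder path)
    )

    with torch.inference_mode():
        # Build the streaming-memory inference state over a directory of frames.
        state = predictor.init_state(video_path="./video_frames")

        # Prompt one object with a single positive click on frame 0.
        predictor.add_new_points_or_box(
            state,
            frame_idx=0,
            obj_id=1,
            points=np.array([[320, 240]], dtype=np.float32),  # (x, y) pixels
            labels=np.array([1], dtype=np.int32),             # 1 = foreground
        )

        # Propagate masks through the remaining frames via streaming memory.
        for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
            masks = (mask_logits > 0.0).cpu().numpy()  # binary mask per object

Further clicks on later frames can be added with the same prompting call to refine the result, which is the interaction loop behind the abstract's "3x fewer interactions" comparison.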

