
ARGenSeg: Image Segmentation with Autoregressive Image Generation Model

October 23, 2025
作者: Xiaolong Wang, Lixiang Ru, Ziyuan Huang, Kaixiang Ji, Dandan Zheng, Jingdong Chen, Jun Zhou
cs.AI

Abstract

We propose a novel AutoRegressive Generation-based paradigm for image Segmentation (ARGenSeg), achieving multimodal understanding and pixel-level perception within a unified framework. Prior works integrating image segmentation into multimodal large language models (MLLMs) typically employ either boundary-point representations or dedicated segmentation heads. These methods rely on discrete representations or semantic prompts fed into task-specific decoders, which limits the ability of the MLLM to capture fine-grained visual details. To address these challenges, we introduce a segmentation framework for MLLMs based on image generation, which naturally produces dense masks for target objects. We leverage the MLLM to output visual tokens and detokenize them into images using a universal VQ-VAE, making the segmentation fully dependent on the pixel-level understanding of the MLLM. To reduce inference latency, we employ a next-scale-prediction strategy to generate the required visual tokens in parallel. Extensive experiments demonstrate that our method surpasses prior state-of-the-art approaches on multiple segmentation datasets with a remarkable boost in inference speed, while maintaining strong understanding capabilities.
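To make the described pipeline concrete, below is a minimal, hypothetical sketch of the two mechanisms named in the abstract: next-scale prediction, where each autoregressive step emits every token of the next, finer grid in one parallel forward pass, and a VQ-VAE decoder that detokenizes the finest token map into a dense mask. All module names, shapes, and the dummy MLLM head are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an ARGenSeg-style flow (assumed names and shapes,
# not the paper's code): scale-by-scale visual-token generation followed by
# VQ-VAE detokenization into a dense segmentation mask.
import torch
import torch.nn as nn


class ToyVQVAEDecoder(nn.Module):
    """Stand-in for a universal VQ-VAE decoder: code grid -> mask logits."""

    def __init__(self, codebook_size=1024, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)
        self.to_pixels = nn.Sequential(
            nn.ConvTranspose2d(dim, 32, 4, stride=4),  # upsample token grid 4x
            nn.GELU(),
            nn.Conv2d(32, 1, 3, padding=1),            # 1-channel mask logits
        )

    def forward(self, token_ids):                      # (B, H, W) integer codes
        emb = self.codebook(token_ids).permute(0, 3, 1, 2)  # (B, dim, H, W)
        return self.to_pixels(emb)                     # (B, 1, 4H, 4W)


def next_scale_generate(mllm_head, scales=(1, 2, 4, 8)):
    """Autoregress over *scales*, not individual tokens: each step predicts
    all s*s tokens of the next scale in parallel, conditioned on coarser ones."""
    context = []
    for s in scales:
        logits = mllm_head(context, grid=s)            # (B, s, s, codebook)
        ids = logits.argmax(dim=-1)                    # greedy pick, (B, s, s)
        context.append(ids)
    return context[-1]                                 # finest-scale token map


def dummy_head(context, grid, codebook_size=1024, batch=1):
    """Dummy MLLM head so the sketch runs end to end; a real model would
    condition on the text query and image features instead of random logits."""
    return torch.randn(batch, grid, grid, codebook_size)


if __name__ == "__main__":
    tokens = next_scale_generate(dummy_head)           # (1, 8, 8) code indices
    mask_logits = ToyVQVAEDecoder()(tokens)            # (1, 1, 32, 32)
    mask = mask_logits.sigmoid() > 0.5                 # binary segmentation mask
    print(tokens.shape, mask.shape)
```

The point of the scale loop is the latency claim: four forward passes produce 1 + 4 + 16 + 64 tokens here, versus 85 sequential steps under token-by-token decoding.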