

Diffusion Model as a Generalist Segmentation Learner

April 27, 2026
Authors: Haoxiao Wang, Antao Xiang, Haiyang Sun, Peilin Sun, Changhao Pan, Yifu Chen, Minjie Hong, Weijie Wang, Shuang Chen, Yue Chen, Zhou Zhao
cs.AI

Abstract

Diffusion models are primarily trained for image synthesis, yet their denoising trajectories encode rich, spatially aligned visual priors. In this paper, we demonstrate that these priors can be leveraged for text-conditioned semantic and open-vocabulary segmentation, and that the approach generalizes across downstream tasks, yielding a general-purpose diffusion segmentation framework. Concretely, we introduce DiGSeg (Diffusion Models as a Generalist Segmentation Learner), which repurposes a pretrained diffusion model into a unified segmentation framework. Our approach encodes the input image and ground-truth mask into the latent space and concatenates them as conditioning signals for the diffusion U-Net. A parallel CLIP-aligned text pathway injects language features across multiple scales, enabling the model to align textual queries with evolving visual representations. This design transforms an off-the-shelf diffusion backbone into a universal interface that produces structured segmentation masks conditioned on both appearance and arbitrary text prompts. Extensive experiments demonstrate state-of-the-art performance on standard semantic segmentation benchmarks, as well as strong open-vocabulary generalization and cross-domain transfer to medical, remote sensing, and agricultural scenarios, without domain-specific architectural customization. These results indicate that modern diffusion backbones can serve as generalist segmentation learners rather than pure generators, narrowing the gap between visual generation and visual understanding.
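The abstract describes two conditioning mechanisms: channel-wise concatenation of image and mask latents at the U-Net input, and multi-scale injection of CLIP-aligned text features. The shape-level sketch below illustrates that data flow only; the encoder, channel widths, and projection weights are hypothetical stand-ins (the real system would use a VAE encoder, a CLIP text encoder, and cross-attention rather than additive injection).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_to_latent(x, latent_ch=4, stride=8):
    """Toy stand-in for a VAE encoder: strided average pooling
    followed by a random channel projection (shapes only)."""
    H, W, C = x.shape
    h, w = H // stride, W // stride
    pooled = x[:h * stride, :w * stride].reshape(h, stride, w, stride, C).mean(axis=(1, 3))
    proj = rng.standard_normal((C, latent_ch))
    return pooled @ proj  # (h, w, latent_ch)

image = rng.standard_normal((256, 256, 3))  # input image
mask = rng.standard_normal((256, 256, 1))   # ground-truth mask, one channel

z_img = encode_to_latent(image)             # (32, 32, 4)
z_mask = encode_to_latent(mask)             # (32, 32, 4)

# Concatenate image and mask latents along channels to form the
# diffusion U-Net's conditioning input.
unet_in = np.concatenate([z_img, z_mask], axis=-1)
print(unet_in.shape)  # (32, 32, 8)

# Multi-scale text injection: project one CLIP-style text embedding to
# each feature scale's channel width and broadcast it over the spatial
# grid (the paper uses learned attention; addition keeps the sketch small).
text_emb = rng.standard_normal(512)
for ch, scale in [(8, 32), (16, 16), (32, 8)]:  # hypothetical pyramid
    W_txt = rng.standard_normal((512, ch))
    feat = rng.standard_normal((scale, scale, ch))
    feat = feat + (text_emb @ W_txt)  # broadcast over spatial dims
    print(feat.shape)
```

Concatenating the mask latent (rather than predicting pixels directly) keeps the segmentation target in the same latent space the diffusion backbone was pretrained on, which is what lets an off-the-shelf generator be reused without architectural surgery.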