
DeCLIP:面向开放词汇密集感知的解耦学习

DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception

May 7, 2025
Authors: Junjie Wang, Bin Chen, Yulin Li, Bin Kang, Yichi Chen, Zhuotao Tian
cs.AI

Abstract

Dense visual prediction tasks have been constrained by their reliance on predefined categories, limiting their applicability in real-world scenarios where visual concepts are unbounded. While Vision-Language Models (VLMs) like CLIP have shown promise in open-vocabulary tasks, their direct application to dense prediction often leads to suboptimal performance due to limitations in local feature representation. In this work, we present our observation that CLIP's image tokens struggle to effectively aggregate information from spatially or semantically related regions, resulting in features that lack local discriminability and spatial consistency. To address this issue, we propose DeCLIP, a novel framework that enhances CLIP by decoupling the self-attention module to obtain "content" and "context" features respectively. The "content" features are aligned with image crop representations to improve local discriminability, while the "context" features learn to retain the spatial correlations under the guidance of vision foundation models, such as DINO. Extensive experiments demonstrate that DeCLIP significantly outperforms existing methods across multiple open-vocabulary dense prediction tasks, including object detection and semantic segmentation. Code is available at https://github.com/xiaomoguhz/DeCLIP.
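To make the decoupling idea concrete, below is a minimal, illustrative PyTorch sketch of splitting a final self-attention layer into a "content" branch and a "context" branch. The module name, the use of query-query attention for the "content" branch, and the externally supplied DINO-style affinity for the "context" branch are assumptions chosen for illustration, not DeCLIP's exact implementation (see the official repository for that).

```python
# Illustrative sketch only: one way to decouple a final self-attention layer
# into "content" and "context" branches. Not the authors' exact recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoupledSelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, context_affinity: torch.Tensor):
        # x: (B, N, dim) image tokens.
        # context_affinity: (B, N, N) row-normalized spatial affinity,
        # e.g. derived from a vision foundation model such as DINO.
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # "Content" branch: self-self (q-q) attention sharpens local
        # discriminability; during training its output would be aligned
        # with CLIP image-crop embeddings.
        content_attn = F.softmax(q @ q.transpose(-2, -1) * self.scale, dim=-1)
        content = self.out_proj(content_attn @ v)

        # "Context" branch: aggregate values with the VFM-guided affinity
        # so spatial correlations are preserved.
        context = self.out_proj(context_affinity @ v)
        return content, context


if __name__ == "__main__":
    B, N, D = 2, 196, 512
    x = torch.randn(B, N, D)
    # Stand-in for a DINO-derived affinity (row-normalized cosine similarity).
    feats = F.normalize(torch.randn(B, N, D), dim=-1)
    affinity = F.softmax(feats @ feats.transpose(-2, -1) / 0.07, dim=-1)
    content, context = DecoupledSelfAttention(D)(x, affinity)
    print(content.shape, context.shape)  # both (2, 196, 512)
```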

