

Adapting Vision-Language Models Without Labels: A Comprehensive Survey

August 7, 2025
Authors: Hao Dong, Lijun Sheng, Jian Liang, Ran He, Eleni Chatzi, Olga Fink
cs.AI

Abstract

Vision-Language Models (VLMs) have demonstrated remarkable generalization capabilities across a wide range of tasks. However, their performance often remains suboptimal when directly applied to specific downstream scenarios without task-specific adaptation. To enhance their utility while preserving data efficiency, recent research has increasingly focused on unsupervised adaptation methods that do not rely on labeled data. Despite the growing interest in this area, there remains a lack of a unified, task-oriented survey dedicated to unsupervised VLM adaptation. To bridge this gap, we present a comprehensive and structured overview of the field. We propose a taxonomy based on the availability and nature of unlabeled visual data, categorizing existing approaches into four key paradigms: Data-Free Transfer (no data), Unsupervised Domain Transfer (abundant data), Episodic Test-Time Adaptation (batch data), and Online Test-Time Adaptation (streaming data). Within this framework, we analyze core methodologies and adaptation strategies associated with each paradigm, aiming to establish a systematic understanding of the field. Additionally, we review representative benchmarks across diverse applications and highlight open challenges and promising directions for future research. An actively maintained repository of relevant literature is available at https://github.com/tim-learn/Awesome-LabelFree-VLMs.
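
To make the taxonomy concrete, below is a minimal, illustrative sketch of the fourth paradigm, Online Test-Time Adaptation: a CLIP-style VLM is adapted on a stream of unlabeled test batches by minimizing prediction entropy over only its LayerNorm parameters (a Tent-style strategy common in this literature). The open_clip checkpoint, prompt template, class names, and learning rate are assumptions made for illustration, not choices prescribed by the survey.

```python
# Illustrative sketch: Online Test-Time Adaptation of a CLIP-like VLM
# via entropy minimization on streaming unlabeled batches (Tent-style).
# All concrete choices (checkpoint, prompts, classes, lr) are assumptions.
import torch
import torch.nn.functional as F
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model = model.to(device).eval()

# Hypothetical downstream label set; text features act as the classifier.
class_names = ["dog", "cat", "car"]
tokens = tokenizer([f"a photo of a {c}" for c in class_names]).to(device)
with torch.no_grad():
    text_feat = F.normalize(model.encode_text(tokens), dim=-1)

# Freeze everything, then re-enable only LayerNorm affine parameters
# in the vision tower -- the small, stable parameter set Tent adapts.
for p in model.parameters():
    p.requires_grad_(False)
ln_params = []
for m in model.visual.modules():
    if isinstance(m, torch.nn.LayerNorm):
        for p in m.parameters():
            p.requires_grad_(True)
            ln_params.append(p)
optimizer = torch.optim.SGD(ln_params, lr=1e-4)

def adapt_step(images):
    """One online step: predict, minimize prediction entropy, update."""
    img_feat = F.normalize(model.encode_image(images), dim=-1)
    logits = 100.0 * img_feat @ text_feat.T  # fixed logit scale for the sketch
    log_probs = logits.log_softmax(dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.argmax(dim=-1)

# Usage: preprocess each incoming image with `preprocess`, then
# for images in test_stream: preds = adapt_step(images.to(device))
```

Because only normalization parameters are updated, each step is cheap and the zero-shot classifier (the text features) stays fixed; the other three paradigms in the taxonomy differ mainly in how much unlabeled data is available and when the update happens.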