Adapting Vision-Language Models Without Labels: A Comprehensive Survey

August 7, 2025
Authors: Hao Dong, Lijun Sheng, Jian Liang, Ran He, Eleni Chatzi, Olga Fink
cs.AI

Abstract

Vision-Language Models (VLMs) have demonstrated remarkable generalization capabilities across a wide range of tasks. However, their performance often remains suboptimal when directly applied to specific downstream scenarios without task-specific adaptation. To enhance their utility while preserving data efficiency, recent research has increasingly focused on unsupervised adaptation methods that do not rely on labeled data. Despite the growing interest in this area, there remains a lack of a unified, task-oriented survey dedicated to unsupervised VLM adaptation. To bridge this gap, we present a comprehensive and structured overview of the field. We propose a taxonomy based on the availability and nature of unlabeled visual data, categorizing existing approaches into four key paradigms: Data-Free Transfer (no data), Unsupervised Domain Transfer (abundant data), Episodic Test-Time Adaptation (batch data), and Online Test-Time Adaptation (streaming data). Within this framework, we analyze core methodologies and adaptation strategies associated with each paradigm, aiming to establish a systematic understanding of the field. Additionally, we review representative benchmarks across diverse applications and highlight open challenges and promising directions for future research. An actively maintained repository of relevant literature is available at https://github.com/tim-learn/Awesome-LabelFree-VLMs.
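
To make the taxonomy concrete, the sketch below illustrates one of the four paradigms, Online Test-Time Adaptation, with a Tent/TPT-style entropy-minimization loop on a CLIP-like model over a stream of unlabeled images. This is a minimal illustrative sketch, not the survey's own method: the open_clip-style interface (encode_image, encode_text), the choice of which parameters to tune, and all hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch of Online Test-Time Adaptation (streaming data):
# adapt a CLIP-style VLM on unlabeled images by minimizing prediction
# entropy. Assumes an open_clip-style interface; names and settings
# here are hypothetical, not from the surveyed papers.
import torch
import torch.nn.functional as F

def adapt_online(model, tokenizer, class_names, image_stream, lr=1e-3):
    """Adapt a VLM to a stream of unlabeled images, no labels needed."""
    # Text features are computed once from fixed prompts and kept frozen.
    prompts = [f"a photo of a {c}" for c in class_names]
    with torch.no_grad():
        text_feat = F.normalize(model.encode_text(tokenizer(prompts)), dim=-1)

    # A common design choice: update only the normalization-layer affine
    # parameters of the image encoder; everything else stays frozen.
    params = [p for n, p in model.named_parameters()
              if "visual" in n and ("ln" in n or "norm" in n)]
    optimizer = torch.optim.SGD(params, lr=lr)

    for images in image_stream:  # unlabeled batches arriving online
        img_feat = F.normalize(model.encode_image(images), dim=-1)
        logits = 100.0 * img_feat @ text_feat.T  # cosine-similarity logits
        probs = logits.softmax(dim=-1)

        # Entropy of the zero-shot predictions: the classic unsupervised
        # test-time objective, encouraging confident outputs.
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean()

        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()

        yield logits.argmax(dim=-1)  # predictions for the current batch
```

The other three paradigms in the taxonomy differ mainly in what replaces the stream: Data-Free Transfer uses no target images at all, Unsupervised Domain Transfer adapts offline on an abundant unlabeled target set, and Episodic Test-Time Adaptation works from a single test batch.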