AdaptCLIP: Adapting CLIP for Universal Visual Anomaly Detection
May 15, 2025
Authors: Bin-Bin Gao, Yue Zhu, Jiangtao Yan, Yuezhi Cai, Weixi Zhang, Meng Wang, Jun Liu, Yong Liu, Lei Wang, Chengjie Wang
cs.AI
Abstract
Universal visual anomaly detection aims to identify anomalies from novel or
unseen vision domains without additional fine-tuning, which is critical in open
scenarios. Recent studies have demonstrated that pre-trained vision-language
models like CLIP exhibit strong generalization with just zero or a few normal
images. However, existing methods struggle with prompt template design, complex
token interactions, or the need for additional fine-tuning, resulting in
limited flexibility. In this work, we present a simple yet effective method
called AdaptCLIP based on two key insights. First, adaptive visual and textual
representations should be learned alternately rather than jointly. Second,
comparative learning between the query and the normal image prompt should incorporate
both contextual and aligned residual features, rather than relying solely on
residual features. AdaptCLIP treats CLIP models as a foundational service,
adding only three simple adapters (a visual adapter, a textual adapter, and a
prompt-query adapter) at its input or output ends. AdaptCLIP supports
zero-/few-shot generalization across domains and, once trained on a base
dataset, requires no further training on target domains. AdaptCLIP achieves
state-of-the-art performance on 12 anomaly detection benchmarks from industrial
and medical domains, significantly outperforming existing competitive methods.
We will make the code and model of AdaptCLIP available at
https://github.com/gaobb/AdaptCLIP.
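The abstract does not detail the adapters' internal design, so the following is only a minimal PyTorch sketch of the stated idea: lightweight adapters sitting on top of frozen CLIP features, and a prompt-query adapter that compares a query image with a normal image prompt using both contextual features and residual features. The module names, bottleneck-MLP design, feature dimensions, and the simplified (unaligned) residual are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Hypothetical lightweight bottleneck adapter over frozen CLIP features."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen CLIP representation as the backbone signal.
        return x + self.net(x)


class PromptQueryAdapter(nn.Module):
    """Compares query and normal-prompt features, fusing contextual features
    with residual features (a simplified stand-in for the paper's aligned residuals)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        # Input is the concatenation of [query context, prompt context, residual].
        self.head = nn.Sequential(
            nn.Linear(3 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, query: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        # query, prompt: (B, N, D) patch features from frozen CLIP + visual adapter.
        residual = query - prompt                              # residual features
        fused = torch.cat([query, prompt, residual], dim=-1)   # plus contextual features
        return self.head(fused).squeeze(-1)                    # per-patch anomaly logits


# Minimal usage; random tensors stand in for frozen CLIP patch embeddings.
B, N, D = 2, 196, 512
visual_adapter = Adapter(D)
pq_adapter = PromptQueryAdapter(D)
query_feats = visual_adapter(torch.randn(B, N, D))
prompt_feats = visual_adapter(torch.randn(B, N, D))    # one-shot normal image prompt
anomaly_map = torch.sigmoid(pq_adapter(query_feats, prompt_feats))  # (B, N) scores in [0, 1]
```

In this sketch only the small adapter and head parameters would be trained on a base dataset, which is consistent with the abstract's claim that CLIP itself is treated as a frozen foundational service and that no further training is needed on target domains.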