

AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models

September 28, 2023
Authors: Jan Hendrik Metzen, Piyapat Saranrittichai, Chaithanya Kumar Mummadi
cs.AI

Abstract

Classifiers built upon vision-language models such as CLIP have shown remarkable zero-shot performance across a broad range of image classification tasks. Prior work has studied different ways of automatically creating descriptor sets for every class based on prompt templates, ranging from manually engineered templates, through templates obtained from a large language model, to templates built from random words and characters. In contrast, deriving zero-shot classifiers from the respective encoded class descriptors has remained nearly unchanged, that is: classify an image to the class that maximizes the cosine similarity between its averaged encoded class descriptors and the encoded image. However, weighting all class descriptors equally can be suboptimal when certain descriptors match visual cues in a given image better than others. In this work, we propose AutoCLIP, a method for auto-tuning zero-shot classifiers. AutoCLIP assigns per-image weights to each prompt template, derived from statistics of class descriptor-image similarities at inference time. AutoCLIP is fully unsupervised, has very low overhead, and can be easily implemented in a few lines of code. We show that for a broad range of vision-language models, datasets, and prompt templates, AutoCLIP outperforms baselines consistently, by up to 3 percentage points in accuracy.
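The classification rule and the per-image template weighting described above can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the authors' reference implementation: it assumes descriptor and image embeddings are already computed and L2-normalized, and the function names, the temperature parameter, and the softmax-over-logsumexp aggregation are assumptions introduced here for illustration; the paper derives its weights from descriptor-image similarity statistics in a related but possibly different way.

import torch

def zero_shot_baseline(desc_emb: torch.Tensor, img_emb: torch.Tensor) -> torch.Tensor:
    """Standard zero-shot rule from the abstract.
    desc_emb: (num_classes, num_templates, dim) encoded class descriptors.
    img_emb:  (dim,) encoded image. Returns the predicted class index."""
    # Average the encoded descriptors per class, then renormalize; with
    # unit-norm embeddings a dot product equals cosine similarity.
    class_emb = desc_emb.mean(dim=1)
    class_emb = class_emb / class_emb.norm(dim=-1, keepdim=True)
    scores = class_emb @ img_emb            # (num_classes,)
    return scores.argmax()

def autoclip_sketch(desc_emb: torch.Tensor, img_emb: torch.Tensor,
                    temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical AutoCLIP-style variant: weight each prompt template
    per image instead of averaging templates uniformly."""
    sims = desc_emb @ img_emb               # (num_classes, num_templates)
    # Logsumexp over classes acts as a soft maximum: a template scores
    # high if it matches the image's visual cues well for some class.
    template_scores = torch.logsumexp(sims, dim=0)          # (num_templates,)
    weights = torch.softmax(template_scores / temperature, dim=0)
    # Class scores are the weighted average of per-template similarities.
    scores = sims @ weights                  # (num_classes,)
    return scores.argmax()

# Usage with random stand-in embeddings (10 classes, 7 templates, dim 512):
C, T, D = 10, 7, 512
desc_emb = torch.nn.functional.normalize(torch.randn(C, T, D), dim=-1)
img_emb = torch.nn.functional.normalize(torch.randn(D), dim=-1)
pred = autoclip_sketch(desc_emb, img_emb)

The soft-maximum aggregation mirrors the motivation in the abstract: templates whose descriptors align with the visual cues of the particular image receive higher weight, while the computation stays fully unsupervised and adds only a logsumexp and a softmax per image on top of the baseline.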