TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration

October 16, 2024
Authors: Yiwei Guo, Shaobin Zhuang, Kunchang Li, Yu Qiao, Yali Wang
cs.AI

Abstract

Vision-language foundation models (such as CLIP) have recently shown their power in transfer learning, owing to large-scale image-text pre-training. However, target domain data in the downstream tasks can be highly different from the pre-training phase, which makes it hard for such a single model to generalize well. Alternatively, there exists a wide range of expert models that contain diversified vision and/or language knowledge pre-trained on different modalities, tasks, networks, and datasets. Unfortunately, these models are "isolated agents" with heterogeneous structures, and how to integrate their knowledge for generalizing CLIP-like models has not been fully explored. To bridge this gap, we propose a general and concise TransAgent framework, which transports the knowledge of the isolated agents in a unified manner, and effectively guides CLIP to generalize with multi-source knowledge distillation. With such a distinct framework, we flexibly collaborate with 11 heterogeneous agents to empower vision-language foundation models, without further cost in the inference phase. Finally, our TransAgent achieves state-of-the-art performance on 11 visual recognition datasets. Under the same low-shot setting, it outperforms the popular CoOp by around 10% on average, and by 20% on EuroSAT, which contains large domain shifts.
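
The abstract describes guiding a CLIP-like model through multi-source knowledge distillation from heterogeneous teacher agents, with the teachers needed only during training so that inference cost is unchanged. Below is a minimal, illustrative PyTorch sketch of that general idea, assuming a gated mixture of teacher logits as the soft target; the function name, tensor shapes, and gating scheme are assumptions for illustration and are not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def multi_source_distillation_loss(student_logits, teacher_logits_list, gate_weights, tau=2.0):
    """KL-distill a gated mixture of heterogeneous teacher predictions into the student.

    student_logits:      [batch, num_classes] logits from the CLIP-like student
    teacher_logits_list: list of [batch, num_classes] logit tensors, one per agent
    gate_weights:        [num_agents] non-negative weights summing to 1 (hypothetical gating)
    tau:                 softmax temperature for distillation
    """
    # Soft target: weighted average of the teachers' temperature-softened distributions.
    teacher_probs = torch.stack(
        [w * F.softmax(t / tau, dim=-1) for w, t in zip(gate_weights, teacher_logits_list)]
    ).sum(dim=0)

    # KL divergence between the student's softened prediction and the mixed teacher target.
    student_log_probs = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (tau ** 2)


# Illustrative usage: teachers are consulted only at training time; at inference the
# tuned student runs alone, so no extra cost is incurred.
if __name__ == "__main__":
    batch, num_classes, num_agents = 4, 10, 3
    student = torch.randn(batch, num_classes, requires_grad=True)
    teachers = [torch.randn(batch, num_classes) for _ in range(num_agents)]
    gates = torch.softmax(torch.randn(num_agents), dim=0)
    loss = multi_source_distillation_loss(student, teachers, gates)
    loss.backward()
    print(float(loss))
```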
