

Three Towers: Flexible Contrastive Learning with Pretrained Image Models

May 26, 2023
Authors: Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou
cs.AI

Abstract

We introduce Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from scratch, LiT (Zhai et al., 2022) has recently shown performance gains from using pretrained classifier embeddings. However, LiT directly replaces the image tower with the frozen embeddings, excluding any potential benefits of contrastively training the image tower. With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training. To achieve this, we introduce a third tower that contains the frozen pretrained embeddings, and we encourage alignment between this third tower and the main image-text towers. Empirically, 3T consistently improves over LiT and the CLIP-style from-scratch baseline for retrieval tasks. For classification, 3T reliably improves over the from-scratch baseline, and while it underperforms relative to LiT for JFT-pretrained models, it outperforms LiT for ImageNet-21k and Places365 pretraining.
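
As a rough illustration of the idea, the sketch below outlines a 3T-style training objective in PyTorch. It assumes the alignment between the frozen third tower and the two main towers uses the same contrastive (InfoNCE) form as the main image-text loss, with all terms weighted equally; the function names and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a Three Towers (3T) objective, assuming contrastive
# (InfoNCE) alignment terms and equal weighting of all three losses.
# Names (clip_loss, three_towers_loss, embedding arguments) are illustrative.
import torch
import torch.nn.functional as F

def clip_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def three_towers_loss(image_emb, text_emb, frozen_emb):
    """Main image-text loss plus alignment of both main towers
    to the frozen, pretrained third tower (equal weights assumed)."""
    main = clip_loss(image_emb, text_emb)
    align_img = clip_loss(image_emb, frozen_emb)
    align_txt = clip_loss(text_emb, frozen_emb)
    return (main + align_img + align_txt) / 3.0
```

In this reading, the frozen tower's embeddings would come from the pretrained classifier (possibly passed through a small trainable projection head), while gradients flow only into the main image and text towers.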