Image Captioners Are Scalable Vision Learners Too
June 13, 2023
Authors: Michael Tschannen, Manoj Kumar, Andreas Steiner, Xiaohua Zhai, Neil Houlsby, Lucas Beyer
cs.AI
Abstract
Contrastive pretraining on image-text pairs from the web is one of the most
popular large-scale pretraining strategies for vision backbones, especially in
the context of large multimodal models. At the same time, image captioning on
this type of data is commonly considered an inferior pretraining strategy. In
this paper, we perform a fair comparison of these two pretraining strategies,
carefully matching training data, compute, and model capacity. Using a standard
encoder-decoder transformer, we find that captioning alone is surprisingly
effective: on classification tasks, captioning produces vision encoders
competitive with contrastively pretrained encoders, while surpassing them on
vision & language tasks. We further analyze the effect of the model
architecture and scale, as well as the pretraining data on the representation
quality, and find that captioning exhibits the same or better scaling behavior
along these axes. Overall, our results show that plain image captioning is a
more powerful pretraining strategy than previously believed.
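To make the comparison concrete, the two pretraining objectives can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the contrastive objective is the standard CLIP-style symmetric InfoNCE loss over a batch of image-text pairs, and the captioning objective is per-token cross-entropy of a decoder's next-token predictions. The function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE: matched image-text pairs are positives,
    all other pairings in the batch serve as negatives.
    (Temperature value is illustrative, not taken from the paper.)"""
    # L2-normalize both sets of embeddings.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (B, B) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (matched pair) as the target class.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.diag(logp).mean()

    # Average the image->text and text->image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def captioning_loss(token_logits, target_tokens):
    """Captioning objective: mean next-token cross-entropy of the decoder's
    predictions (B, T, V) against the reference caption tokens (B, T)."""
    B, T, _ = token_logits.shape
    l = token_logits - token_logits.max(axis=-1, keepdims=True)
    logp = l - np.log(np.exp(l).sum(axis=-1, keepdims=True))
    idx_b = np.arange(B)[:, None]
    idx_t = np.arange(T)[None, :]
    return -logp[idx_b, idx_t, target_tokens].mean()
```

Both losses consume the same web image-text pairs; the difference is that the contrastive loss only asks the vision encoder to separate pairings within a batch, while the captioning loss asks it to support word-by-word reconstruction of the text, which is one intuition for why the resulting encoders transfer differently to vision-and-language tasks.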