Improving fine-grained understanding in image-text pre-training
January 18, 2024
Authors: Ioana Bica, Anastasija Ilić, Matthias Bauer, Goker Erdogan, Matko Bošnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrović
cs.AI
Abstract
We introduce SPARse Fine-grained Contrastive Alignment (SPARC), a simple
method for pretraining more fine-grained multimodal representations from
image-text pairs. Given that multiple image patches often correspond to single
words, we propose to learn a grouping of image patches for every token in the
caption. To achieve this, we use a sparse similarity metric between image
patches and language tokens and compute for each token a language-grouped
vision embedding as the weighted average of patches. The token and
language-grouped vision embeddings are then contrasted through a fine-grained
sequence-wise loss that only depends on individual samples and does not require
other batch samples as negatives. This enables more detailed information to be
learned in a computationally inexpensive manner. SPARC combines this
fine-grained loss with a contrastive loss between global image and text
embeddings to learn representations that simultaneously encode global and local
information. We thoroughly evaluate our proposed method and show improved
performance over competing approaches both on image-level tasks relying on
coarse-grained information, e.g. classification, as well as region-level tasks
relying on fine-grained information, e.g. retrieval, object detection, and
segmentation. Moreover, SPARC improves model faithfulness and captioning in
foundational vision-language models.
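
To make the pipeline concrete, below is a minimal NumPy sketch of the fine-grained part of the objective for a single image-text pair: patch-token similarities are sparsified, each token gets a language-grouped vision embedding as a weighted average of patches, and tokens are contrasted against these grouped embeddings within the same sequence. The specific sparsification rule (min-max normalisation with a 1/P threshold), the temperature, and all function names are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
import numpy as np

def log_softmax(x, axis):
    """Numerically stable log-softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def sparc_fine_grained_loss(patches, tokens, tau=0.1):
    """patches: (P, D) image patch embeddings; tokens: (T, D) caption token embeddings.

    Hypothetical sketch of SPARC's fine-grained sequence-wise loss; details
    such as the sparsity threshold are assumptions for illustration.
    """
    P = patches.shape[0]

    # 1. Similarity between every caption token and every image patch.
    sim = tokens @ patches.T                              # (T, P)

    # 2. Sparse alignment weights (assumed scheme): min-max normalise each
    #    token's row to [0, 1], zero out entries below the uniform level 1/P,
    #    then renormalise so each token's weights sum to 1.
    lo = sim.min(axis=1, keepdims=True)
    hi = sim.max(axis=1, keepdims=True)
    w = (sim - lo) / (hi - lo + 1e-8)
    w = np.where(w < 1.0 / P, 0.0, w)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)

    # 3. Language-grouped vision embedding per token: a weighted average of
    #    the patches that the token aligns to.
    grouped = w @ patches                                 # (T, D)

    # 4. Sequence-wise contrastive loss: negatives are the other tokens of
    #    the SAME caption, so no other batch samples are required.
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    g = grouped / np.linalg.norm(grouped, axis=1, keepdims=True)
    logits = (t @ g.T) / tau                              # (T, T)
    diag = np.arange(tokens.shape[0])
    loss_t2v = -log_softmax(logits, axis=1)[diag, diag].mean()
    loss_v2t = -log_softmax(logits, axis=0)[diag, diag].mean()
    return 0.5 * (loss_t2v + loss_v2t)

# Example with random embeddings (49 patches, 12 tokens, dimension 512).
rng = np.random.default_rng(0)
loss = sparc_fine_grained_loss(rng.normal(size=(49, 512)),
                               rng.normal(size=(12, 512)))
```

Per the abstract, SPARC's full objective adds a standard contrastive loss between global image and text embeddings, computed across the batch, so that representations encode both global and local information.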