
VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks

October 7, 2024
Authors: Ziyan Jiang, Rui Meng, Xinyi Yang, Semih Yavuz, Yingbo Zhou, Wenhu Chen
cs.AI

Abstract
Embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering. Recently, there has been a surge of interest in developing universal text embedding models that can generalize across tasks (e.g., MTEB). However, progress in learning universal multimodal embedding models has been relatively slow despite their importance. In this work, we aim to explore the potential for building universal embeddings capable of handling a wide range of downstream tasks. Our contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark), which covers 4 meta-tasks (i.e., classification, visual question answering, multimodal retrieval, and visual grounding) and 36 datasets, including 20 training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model -> Vector), a contrastive training framework that converts any state-of-the-art vision-language model into an embedding model via training on MMEB. Unlike previous models such as CLIP and BLIP, VLM2Vec can process any combination of images and text to generate a fixed-dimensional vector based on task instructions. We build a series of VLM2Vec models on Phi-3.5-V and evaluate them on MMEB's evaluation split. Our results show that VLM2Vec achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models on both in-distribution and out-of-distribution datasets in MMEB.
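The contrastive training the abstract describes typically pairs each query embedding with its matched target and treats the other items in the batch as negatives. Below is a minimal NumPy sketch of that in-batch InfoNCE objective; the temperature value, batch construction, and the assumption that the VLM has already been reduced to one fixed-dimensional vector per input are illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def info_nce_loss(queries, targets, temperature=0.05):
    """In-batch InfoNCE: row i of `queries` should match row i of `targets`.

    `queries` and `targets` are (B, D) arrays of embeddings, e.g. pooled
    from a vision-language model (a hypothetical upstream step, not shown).
    """
    # Normalize rows so dot products become cosine similarities.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)

    # (B, B) similarity matrix; entry (i, j) compares query i with target j.
    logits = (q @ t.T) / temperature

    # Log-softmax over each row, with max-subtraction for numerical stability.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Matched pairs sit on the diagonal; everything else is an in-batch negative.
    return float(-np.mean(np.diag(log_probs)))
```

When each query is already closest to its own target, the loss is near zero; mismatched pairings drive it up, which is what pushes the encoder toward task-aligned embeddings.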
