VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks
October 7, 2024
Authors: Ziyan Jiang, Rui Meng, Xinyi Yang, Semih Yavuz, Yingbo Zhou, Wenhu Chen
cs.AI
Abstract
Embedding models have been crucial in enabling various downstream tasks such
as semantic similarity, information retrieval, and clustering. Recently, there
has been a surge of interest in developing universal text embedding models that
can generalize across tasks (e.g., MTEB). However, progress in learning
universal multimodal embedding models has been relatively slow despite their
importance. In this work, we aim to explore the potential for building
universal embeddings capable of handling a wide range of downstream tasks. Our
contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark),
which covers 4 meta-tasks (i.e. classification, visual question answering,
multimodal retrieval, and visual grounding) and 36 datasets, including 20
training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model ->
Vector), a contrastive training framework that converts any state-of-the-art
vision-language model into an embedding model via training on MMEB. Unlike
previous models such as CLIP and BLIP, VLM2Vec can process any combination of
images and text to generate a fixed-dimensional vector based on task
instructions. We build a series of VLM2Vec models on Phi-3.5-V and evaluate
them on MMEB's evaluation split. Our results show that VLM2Vec achieves an
absolute average improvement of 10% to 20% over existing multimodal embedding
models on both in-distribution and out-of-distribution datasets in MMEB.
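The recipe the abstract describes (pool a fixed-dimensional embedding from a VLM's hidden states, then train it contrastively against task-paired targets with in-batch negatives) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact implementation: the last-token pooling strategy, the temperature value, the helper names `pool_embedding` and `info_nce_loss`, and the random tensors standing in for actual Phi-3.5-V hidden states.

```python
# Minimal sketch of a VLM2Vec-style setup: pool one fixed-dimensional vector
# per (instruction, image, text) input from the backbone's last hidden layer,
# then train with an InfoNCE-style contrastive loss over in-batch negatives.
import torch
import torch.nn.functional as F

def pool_embedding(hidden_states: torch.Tensor,
                   attention_mask: torch.Tensor) -> torch.Tensor:
    """Pool one unit-norm vector per sequence from the last hidden layer.

    hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len).
    Takes the hidden state of the last non-padded token -- a common choice
    for decoder-style VLMs, assumed here for illustration.
    """
    last_idx = attention_mask.sum(dim=1) - 1            # (batch,)
    batch_idx = torch.arange(hidden_states.size(0))
    emb = hidden_states[batch_idx, last_idx]            # (batch, dim)
    return F.normalize(emb, dim=-1)

def info_nce_loss(query_emb: torch.Tensor,
                  target_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """In-batch-negative contrastive loss: the i-th query should match the
    i-th target; all other targets in the batch serve as negatives."""
    logits = query_emb @ target_emb.T / temperature     # (batch, batch)
    labels = torch.arange(logits.size(0))
    return F.cross_entropy(logits, labels)

# Toy usage: random tensors stand in for VLM outputs on instruction-prefixed
# queries (e.g. "Represent the image for classification: <image>") and on
# their positive targets.
batch, seq_len, dim = 8, 16, 64
mask = torch.ones(batch, seq_len, dtype=torch.long)
q = pool_embedding(torch.randn(batch, seq_len, dim), mask)
t = pool_embedding(torch.randn(batch, seq_len, dim), mask)
print(f"contrastive loss: {info_nce_loss(q, t).item():.4f}")
```

Because the instruction is part of the pooled query sequence, the same backbone can emit task-conditioned embeddings for classification, VQA, retrieval, or grounding without any task-specific heads, which is what lets a single contrastively trained model cover all four MMEB meta-tasks.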