

ABC: Achieving Better Control of Multimodal Embeddings using VLMs

March 1, 2025
Authors: Benjamin Schneider, Florian Kerschbaum, Wenhu Chen
cs.AI

Abstract

Visual embedding models excel at zero-shot tasks like visual retrieval and classification. However, these models cannot be used for tasks that contain ambiguity or require user instruction. These tasks necessitate a multimodal embedding model, which outputs embeddings that combine visual and natural language inputs. Existing CLIP-based approaches embed images and text independently and fuse the results. We find that this results in weak interactions between modalities and poor user control over the representation. We introduce ABC, an open-source multimodal embedding model that uses a vision-language model backbone to deeply integrate image features with natural language instructions. ABC achieves best-for-size performance on MSCOCO image-to-text retrieval and is the top-performing model on classification and VQA tasks in the Massive Multimodal Embedding Benchmark. With a strongly unified vision-language representation, ABC can use natural language to solve subtle and potentially ambiguous visual retrieval problems. To evaluate this capability, we design CtrlBench, a benchmark that requires interleaving textual instructions with image content for correct retrieval. ABC advances the state of multimodal embeddings by offering high-quality representations and flexible natural language control. Our model and datasets are available at our project page.
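
The abstract contrasts two designs: CLIP-style late fusion, where image and text are embedded independently and merged afterwards, and a VLM backbone that processes image and instruction tokens jointly, so the instruction can steer how the image is represented. The following minimal PyTorch sketch illustrates that contrast; the stand-in encoders, dimensions, and pooling choice are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- CLIP-style late fusion (hypothetical stand-ins for real encoders) ---
# Image and text are embedded by separate towers and fused afterwards,
# so the text instruction cannot condition how the image is encoded.
image_encoder = torch.nn.Linear(2048, 512)   # placeholder for a vision tower
text_encoder = torch.nn.Linear(768, 512)     # placeholder for a text tower

image_feats = torch.randn(1, 2048)           # stand-in for pooled pixel features
text_feats = torch.randn(1, 768)             # stand-in for pooled token features

img_emb = F.normalize(image_encoder(image_feats), dim=-1)
txt_emb = F.normalize(text_encoder(text_feats), dim=-1)
fused = F.normalize(img_emb + txt_emb, dim=-1)  # simple additive fusion

# --- VLM-backbone joint embedding (the style of approach ABC describes) ---
# Image tokens and instruction tokens pass through one transformer together,
# so cross-attention between modalities shapes the final representation.
backbone = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=2,
)
image_tokens = torch.randn(1, 16, 512)       # stand-in for projected image patches
instruction_tokens = torch.randn(1, 8, 512)  # stand-in for embedded instruction

joint = backbone(torch.cat([image_tokens, instruction_tokens], dim=1))
embedding = F.normalize(joint.mean(dim=1), dim=-1)  # mean-pooled joint embedding
```

In the late-fusion path, changing the instruction only shifts the final sum; in the joint path, attention lets the instruction reweight the image tokens themselves, which is the kind of control the paper evaluates with CtrlBench.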
