MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional Multimodal Embeddings

June 29, 2025
Authors: Haonan Chen, Hong Liu, Yuping Luo, Liang Wang, Nan Yang, Furu Wei, Zhicheng Dou
cs.AI

Abstract

Multimodal embedding models, built upon causal Vision Language Models (VLMs), have shown promise in various tasks. However, current approaches face three key limitations: the use of causal attention in VLM backbones is suboptimal for embedding tasks; scalability issues due to reliance on high-quality labeled paired data for contrastive learning; and limited diversity in training objectives and data. To address these issues, we propose MoCa, a two-stage framework for transforming pre-trained VLMs into effective bidirectional multimodal embedding models. The first stage, Modality-aware Continual Pre-training, introduces a joint reconstruction objective that simultaneously denoises interleaved text and image inputs, enhancing bidirectional context-aware reasoning. The second stage, Heterogeneous Contrastive Fine-tuning, leverages diverse, semantically rich multimodal data beyond simple image-caption pairs to enhance generalization and alignment. Our method addresses the stated limitations by introducing bidirectional attention through continual pre-training, scaling effectively with massive unlabeled datasets via joint reconstruction objectives, and utilizing diverse multimodal data for enhanced representation robustness. Experiments demonstrate that MoCa consistently improves performance across MMEB and ViDoRe-v2 benchmarks, achieving new state-of-the-art results, and exhibits strong scalability with both model size and training data on MMEB.
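The abstract describes the two training stages only at a high level, so the sketch below illustrates one plausible form of the two objectives: a joint reconstruction loss over masked text tokens and masked image patches for Stage 1, and a symmetric in-batch InfoNCE contrastive loss over query/target embeddings for Stage 2. All function names, the masking convention, and the temperature value are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of MoCa-style training objectives (not the authors' code).
# Stage 1: joint reconstruction over interleaved text and image inputs
#          (masked-token prediction for text + regression on masked image patches).
# Stage 2: heterogeneous contrastive fine-tuning (in-batch InfoNCE).
import torch
import torch.nn.functional as F


def joint_reconstruction_loss(text_logits, text_labels, image_pred, image_target, image_mask):
    """Assumed Stage-1 form: cross-entropy on masked text tokens plus
    mean-squared error on masked image patches."""
    # text_labels uses -100 at unmasked positions, as in standard MLM setups.
    text_loss = F.cross_entropy(
        text_logits.reshape(-1, text_logits.size(-1)),
        text_labels.reshape(-1),
        ignore_index=-100,
    )
    # Reconstruct only the masked patches (MAE-style masking assumed).
    patch_err = ((image_pred - image_target) ** 2).mean(dim=-1)      # (B, num_patches)
    image_loss = (patch_err * image_mask).sum() / image_mask.sum().clamp(min=1)
    return text_loss + image_loss


def contrastive_loss(query_emb, target_emb, temperature=0.05):
    """Assumed Stage-2 form: symmetric InfoNCE with in-batch negatives."""
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = q @ t.T / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))


if __name__ == "__main__":
    # Toy shapes only; a real setup would embed interleaved text/image batches
    # with a VLM backbone converted to bidirectional attention.
    B, V, P, D = 4, 32000, 16, 768
    print(joint_reconstruction_loss(
        torch.randn(B, 8, V), torch.randint(0, V, (B, 8)),
        torch.randn(B, P, D), torch.randn(B, P, D), torch.ones(B, P)))
    print(contrastive_loss(torch.randn(B, D), torch.randn(B, D)))
```

In a full pipeline, the embeddings passed to the contrastive loss would come from the VLM backbone after its causal attention mask has been replaced with a bidirectional one during continual pre-training, which is the change the first stage is intended to enable.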