

ModernVBERT: Towards Smaller Visual Document Retrievers

October 1, 2025
Authors: Paul Teiletche, Quentin Macé, Max Conti, Antonio Loison, Gautier Viaud, Pierre Colombo, Manuel Faysse
cs.AI

Abstract

Multimodal embedding models are gaining prevalence, notably for document retrieval, as efficient alternatives to text-only pipelines. These models are typically built by finetuning large vision-language decoders (VLMs) with contrastive losses on text-image pairs. In this work, we show that, while cost-efficient, this repurposing approach often bottlenecks retrieval performance. Through controlled experiments, we establish a principled recipe for improving visual document retrieval models. We notably measure the impact of attention masking, image resolution, modality alignment data regimes, and late-interaction-centered contrastive objectives, which emerge as central performance factors. Building on these insights, we release ModernVBERT, a compact 250M-parameter vision-language encoder that outperforms models up to 10 times larger when finetuned on document retrieval tasks. Models and code are made available at https://huggingface.co/ModernVBERT.
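To make the abstract's key terms concrete, here is a minimal PyTorch sketch (not the authors' released code; function names, tensor shapes, and the temperature value are illustrative assumptions) contrasting standard single-vector similarity with a ColBERT-style late-interaction MaxSim score, together with the in-batch contrastive (InfoNCE) loss commonly used to finetune such retrievers.

```python
import torch
import torch.nn.functional as F

def single_vector_score(q_vec: torch.Tensor, d_vec: torch.Tensor) -> torch.Tensor:
    """Standard bi-encoder scoring: cosine similarity between one pooled
    query embedding and one pooled document embedding."""
    return F.cosine_similarity(q_vec, d_vec, dim=-1)

def late_interaction_score(q_tokens: torch.Tensor, d_tokens: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim: each query token is matched to its most similar
    document token, and the per-token maxima are summed.

    q_tokens: (num_query_tokens, dim) L2-normalized token embeddings
    d_tokens: (num_doc_tokens, dim) L2-normalized token embeddings
    """
    sim = q_tokens @ d_tokens.T           # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=-1).values.sum()   # max over doc tokens, sum over query tokens

def in_batch_contrastive_loss(scores: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over a batch: scores[i, j] is the similarity of query i with
    document j; the diagonal holds the positive (paired) examples.
    The temperature here is a typical value, not one taken from the paper."""
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores / temperature, labels)
```

Under a late-interaction-centered objective, `late_interaction_score` fills the pairwise `scores` matrix in place of `single_vector_score`; the contrastive loss itself is unchanged, which is why the two scoring regimes can be compared in the paper's controlled setup.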