

MMEmb-R1: Reasoning-Enhanced Multimodal Embedding with Pair-Aware Selection and Adaptive Control

April 7, 2026
Authors: Yuchi Wang, Haiyang Yu, Weikang Bian, Jiefeng Long, Xiao Liang, Chao Feng, Hongsheng Li
cs.AI

Abstract

Multimodal large language models (MLLMs) have been successfully applied to multimodal embedding tasks, yet their generative reasoning capabilities remain underutilized. Directly incorporating chain-of-thought reasoning into embedding learning introduces two fundamental challenges. First, the structural misalignment between instance-level reasoning and pairwise contrastive supervision may lead to shortcut behavior, where the model merely learns the superficial format of reasoning. Second, reasoning is not universally beneficial for embedding tasks: enforcing reasoning for all inputs introduces unnecessary computation and latency, and can even obscure salient semantic signals in simple cases. To address these issues, we propose MMEmb-R1, an adaptive reasoning-based multimodal embedding framework. We formulate reasoning as a latent variable and introduce pair-aware reasoning selection, which employs counterfactual intervention to identify reasoning paths that benefit query-target alignment. Furthermore, we adopt reinforcement learning to invoke reasoning only when necessary. Experiments on the MMEB-V2 benchmark demonstrate that our model achieves a score of 71.2 with only 4B parameters, establishing a new state of the art while significantly reducing reasoning overhead and inference latency.
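The pair-aware selection step can be pictured as a counterfactual check: embed the query with and without a candidate reasoning path, and keep the path only if conditioning on it moves the query embedding closer to the target. The sketch below illustrates this idea with a toy bag-of-characters embedder standing in for the MLLM encoder; the function names (`select_reasoning`, `toy_embed`) and the margin parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_reasoning(embed, query, reasoning, target, margin=0.0):
    """Counterfactual intervention (illustrative): accept a reasoning path
    only if appending it to the query improves query-target alignment
    by more than `margin` relative to the no-reasoning baseline."""
    base = cosine(embed(query), embed(target))                  # without reasoning
    with_r = cosine(embed(query + " " + reasoning), embed(target))  # with reasoning
    return with_r - base > margin

def toy_embed(text):
    """Toy bag-of-characters embedding, a stand-in for the MLLM encoder."""
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v

# A reasoning path that surfaces relevant terms is kept...
print(select_reasoning(toy_embed, "cat", "feline animal", "a feline"))
# ...while irrelevant reasoning, which dilutes the embedding, is rejected.
print(select_reasoning(toy_embed, "cat", "zzzz qqq", "a feline"))
```

In the framework described by the abstract, this accept/reject signal would supervise which latent reasoning paths participate in contrastive training, rather than being applied at inference time as shown here.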