MRMR: A Realistic and Expert-Level Multidisciplinary Benchmark for Reasoning-Intensive Multimodal Retrieval
October 10, 2025
Authors: Siyue Zhang, Yuan Gao, Xiao Zhou, Yilun Zhao, Tingyu Song, Arman Cohan, Anh Tuan Luu, Chen Zhao
cs.AI
Abstract
We introduce MRMR, the first expert-level multidisciplinary multimodal
retrieval benchmark requiring intensive reasoning. MRMR contains 1,502 queries
spanning 23 domains, with positive documents carefully verified by human
experts. Compared to prior benchmarks, MRMR introduces three key advancements.
First, it challenges retrieval systems across diverse areas of expertise,
enabling fine-grained model comparison across domains. Second, queries are
reasoning-intensive, with images requiring deeper interpretation such as
diagnosing microscopic slides. We further introduce Contradiction Retrieval, a
novel task requiring models to identify conflicting concepts. Finally, queries
and documents are constructed as image-text interleaved sequences. Unlike
earlier benchmarks restricted to single images or unimodal documents, MRMR
offers a realistic setting with multi-image queries and mixed-modality corpus
documents. We conduct an extensive evaluation of 4 categories of multimodal
retrieval systems and 14 frontier models on MRMR. The text embedding model
Qwen3-Embedding with LLM-generated image captions achieves the highest
performance, highlighting substantial room for improving multimodal retrieval
models. Although the latest multimodal models such as Ops-MM-Embedding perform
competitively on expert-domain queries, they fall short on reasoning-intensive
tasks. We believe that MRMR paves the way for advancing multimodal retrieval in
more realistic and challenging scenarios.
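The best-performing system described above follows a caption-then-embed pipeline: images are first converted to text captions by an LLM, and a text embedding model then ranks documents against the query. As a minimal sketch of the ranking step only, the toy example below scores documents by cosine similarity over precomputed vectors; the hard-coded vectors stand in for embeddings of caption-augmented text, and this is not the actual Qwen3-Embedding API or the authors' evaluation code:

```python
import numpy as np

def cosine_rank(query_vec, doc_vecs):
    """Rank document vectors by cosine similarity to the query vector.

    Returns (indices sorted best-first, similarity scores per document)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores), scores

# Toy vectors standing in for embeddings of "image caption + text" strings
# (a real system would obtain these from a text embedding model applied to
# LLM-generated captions interleaved with the document text).
query = np.array([1.0, 0.0, 1.0])
docs = np.array([
    [0.9, 0.1, 1.1],   # near-duplicate of the query topic
    [0.0, 1.0, 0.0],   # off-topic
    [1.0, 0.0, 0.0],   # partially overlapping
])
order, scores = cosine_rank(query, docs)
print(order.tolist())  # → [0, 2, 1]: most relevant document first
```

In a retrieval benchmark like MRMR, ranked lists such as `order` would then be scored against expert-verified positive documents with standard metrics (e.g., NDCG).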