
UNIDOC-BENCH: A Unified Benchmark for Document-Centric Multimodal RAG

October 4, 2025
Authors: Xiangyu Peng, Can Qin, Zeyuan Chen, Ran Xu, Caiming Xiong, Chien-Sheng Wu
cs.AI

Abstract

Multimodal retrieval-augmented generation (MM-RAG) is a key approach for applying large language models (LLMs) and agents to real-world knowledge bases, yet current evaluations are fragmented, focusing on either text or images in isolation or on simplified multimodal setups that fail to capture document-centric multimodal use cases. In this paper, we introduce UniDoc-Bench, the first large-scale, realistic benchmark for MM-RAG built from 70k real-world PDF pages across eight domains. Our pipeline extracts and links evidence from text, tables, and figures, then generates 1,600 multimodal QA pairs spanning factual retrieval, comparison, summarization, and logical reasoning queries. To ensure reliability, 20% of QA pairs are validated by multiple annotators and expert adjudication. UniDoc-Bench supports apples-to-apples comparison across four paradigms: (1) text-only, (2) image-only, (3) multimodal text-image fusion, and (4) multimodal joint retrieval -- under a unified protocol with standardized candidate pools, prompts, and evaluation metrics. Our experiments show that multimodal text-image fusion RAG systems consistently outperform both unimodal and jointly multimodal embedding-based retrieval, indicating that neither text nor images alone are sufficient and that current multimodal embeddings remain inadequate. Beyond benchmarking, our analysis reveals when and how visual context complements textual evidence, uncovers systematic failure modes, and offers actionable guidance for developing more robust MM-RAG pipelines.
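
To make the four compared paradigms concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation) of how the retrieval step differs in each setting. The Candidate class, helper names, and toy embedding vectors are assumptions for the example; a real MM-RAG system would obtain embeddings from text, vision, or joint multimodal encoders.

# Hypothetical sketch of the four retrieval paradigms compared in UniDoc-Bench.
# Embeddings are toy vectors; all names are illustrative, not from the paper.
from dataclasses import dataclass
from math import sqrt

@dataclass
class Candidate:
    doc_id: str
    modality: str           # "text" or "image"
    vector: list[float]     # pre-computed embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: list[float], pool: list[Candidate], k: int = 3) -> list[Candidate]:
    # Rank a candidate pool by similarity to the query and keep the top k.
    return sorted(pool, key=lambda c: cosine(query, c.vector), reverse=True)[:k]

def text_only(query, pool, k=3):
    return retrieve(query, [c for c in pool if c.modality == "text"], k)

def image_only(query, pool, k=3):
    return retrieve(query, [c for c in pool if c.modality == "image"], k)

def text_image_fusion(query, pool, k=3):
    # Retrieve from each modality separately, then merge the ranked lists
    # so the generator sees both textual and visual evidence.
    merged = text_only(query, pool, k) + image_only(query, pool, k)
    return sorted(merged, key=lambda c: cosine(query, c.vector), reverse=True)[:k]

def joint_multimodal(query, pool, k=3):
    # Single shared embedding space: rank all candidates together.
    return retrieve(query, pool, k)

if __name__ == "__main__":
    pool = [
        Candidate("report_p12_text", "text", [0.9, 0.1, 0.0]),
        Candidate("report_p12_figure", "image", [0.7, 0.6, 0.1]),
        Candidate("manual_p3_table", "text", [0.2, 0.8, 0.3]),
    ]
    query = [0.8, 0.4, 0.1]
    for name, fn in [("text-only", text_only), ("image-only", image_only),
                     ("fusion", text_image_fusion), ("joint", joint_multimodal)]:
        print(name, [c.doc_id for c in fn(query, pool, k=2)])

Under the benchmark's unified protocol, all four variants would draw from the same standardized candidate pool and feed the retrieved evidence to the same generator with the same prompts and metrics, so differences in answer quality can be attributed to the retrieval paradigm alone.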