
UNIDOC-BENCH:面向文档中心多模態檢索增強生成(RAG)的統一基準測試平台

UNIDOC-BENCH: A Unified Benchmark for Document-Centric Multimodal RAG

October 4, 2025
Authors: Xiangyu Peng, Can Qin, Zeyuan Chen, Ran Xu, Caiming Xiong, Chien-Sheng Wu
cs.AI

Abstract

Multimodal retrieval-augmented generation (MM-RAG) is a key approach for applying large language models (LLMs) and agents to real-world knowledge bases, yet current evaluations are fragmented, focusing on either text or images in isolation or on simplified multimodal setups that fail to capture document-centric multimodal use cases. In this paper, we introduce UniDoc-Bench, the first large-scale, realistic benchmark for MM-RAG built from 70k real-world PDF pages across eight domains. Our pipeline extracts and links evidence from text, tables, and figures, then generates 1,600 multimodal QA pairs spanning factual retrieval, comparison, summarization, and logical reasoning queries. To ensure reliability, 20% of QA pairs are validated by multiple annotators and expert adjudication. UniDoc-Bench supports apples-to-apples comparison across four paradigms: (1) text-only, (2) image-only, (3) multimodal text-image fusion, and (4) multimodal joint retrieval -- under a unified protocol with standardized candidate pools, prompts, and evaluation metrics. Our experiments show that multimodal text-image fusion RAG systems consistently outperform both unimodal and jointly multimodal embedding-based retrieval, indicating that neither text nor images alone are sufficient and that current multimodal embeddings remain inadequate. Beyond benchmarking, our analysis reveals when and how visual context complements textual evidence, uncovers systematic failure modes, and offers actionable guidance for developing more robust MM-RAG pipelines.
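The four retrieval paradigms compared under the unified protocol can be pictured as interchangeable retrieval functions run against one shared candidate pool, one shared generation prompt, and one shared metric. The sketch below is illustrative only and assumes hypothetical interfaces (Candidate, Retriever, evaluate_paradigms, answer_fn, score_fn); it is not the benchmark's actual code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Candidate:
    """One retrievable unit: a text/table chunk and/or a figure image."""
    doc_id: str
    text: Optional[str] = None        # extracted text or linearized table
    image_path: Optional[str] = None  # path to a page or figure crop

# A paradigm is just a retrieval function over the *shared* candidate pool.
Retriever = Callable[[str, List[Candidate], int], List[Candidate]]

def evaluate_paradigms(
    queries: List[str],
    pool: List[Candidate],
    retrievers: Dict[str, Retriever],
    answer_fn: Callable[[str, List[Candidate]], str],
    score_fn: Callable[[str, str], float],
    gold: Dict[str, str],
    k: int = 5,
) -> Dict[str, float]:
    """Score every paradigm with the same pool, prompt, and metric, so that
    differences in quality come from the retrieval paradigm, not the setup."""
    results: Dict[str, float] = {}
    for name, retrieve in retrievers.items():
        scores = []
        for q in queries:
            evidence = retrieve(q, pool, k)           # paradigm-specific step
            answer = answer_fn(q, evidence)           # shared generation prompt
            scores.append(score_fn(answer, gold[q]))  # shared evaluation metric
        results[name] = sum(scores) / max(len(scores), 1)
    return results

# Usage sketch: plug in text-only, image-only, fusion, and joint retrievers, e.g.
# retrievers = {"text-only": text_retrieve, "image-only": image_retrieve,
#               "text-image fusion": fusion_retrieve, "joint multimodal": joint_retrieve}
```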