Can Multimodal Foundation Models Understand Schematic Diagrams? An Empirical Study on Information-Seeking QA over Scientific Papers

July 14, 2025
作者: Yilun Zhao, Chengye Wang, Chuhan Li, Arman Cohan
cs.AI

Abstract

This paper introduces MISS-QA, the first benchmark specifically designed to evaluate the ability of models to interpret schematic diagrams within scientific literature. MISS-QA comprises 1,500 expert-annotated examples over 465 scientific papers. In this benchmark, models are tasked with interpreting schematic diagrams that illustrate research overviews and answering corresponding information-seeking questions based on the broader context of the paper. We assess the performance of 18 frontier multimodal foundation models, including o4-mini, Gemini-2.5-Flash, and Qwen2.5-VL. We reveal a significant performance gap between these models and human experts on MISS-QA. Our analysis of model performance on unanswerable questions and our detailed error analysis further highlight the strengths and limitations of current models, offering key insights to enhance models in comprehending multimodal scientific literature.