MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research

March 17, 2025
作者: James Burgess, Jeffrey J Nirschl, Laura Bravo-Sánchez, Alejandro Lozano, Sanket Rajan Gupte, Jesus G. Galaz-Montoya, Yuhui Zhang, Yuchang Su, Disha Bhowmik, Zachary Coman, Sarina M. Hasan, Alexandra Johannesson, William D. Leineweber, Malvika G Nair, Ridhi Yarlagadda, Connor Zuraski, Wah Chiu, Sarah Cohen, Jan N. Hansen, Manuel D Leonetti, Chad Liu, Emma Lundberg, Serena Yeung-Levy
cs.AI

Abstract

Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment proposal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based 'RefineBot' updates them to remove shortcuts. Benchmarking state-of-the-art MLLMs reveals a peak performance of 53%; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgeneralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing that MicroVQA is a valuable resource for advancing AI-driven biomedical research. MicroVQA is available at https://huggingface.co/datasets/jmhb/microvqa, and the project page is at https://jmhb0.github.io/microvqa.
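
For readers who want to try the benchmark, a minimal evaluation loop might look like the sketch below. It only assumes the Hugging Face `datasets` library and the dataset ID given above; the field names (`image`, `question`, `choices`, `answer`) and the `answer_mcq` function are hypothetical placeholders for the actual schema and for whatever MLLM is being evaluated, so consult the dataset card before use.

```python
# Minimal sketch: score a model on MicroVQA multiple-choice questions.
# Field names (image, question, choices, answer) and answer_mcq() are
# assumptions for illustration; check the dataset card for the real schema.
from datasets import load_dataset


def answer_mcq(image, question, choices):
    """Placeholder for a call to the MLLM under evaluation.

    Should return the index of the option the model selects.
    """
    raise NotImplementedError


def evaluate(split="train"):
    ds = load_dataset("jmhb/microvqa", split=split)
    correct = 0
    for sample in ds:
        pred = answer_mcq(sample["image"], sample["question"], sample["choices"])
        correct += int(pred == sample["answer"])
    return correct / len(ds)


if __name__ == "__main__":
    print(f"MCQ accuracy: {evaluate():.1%}")
```

Reported accuracy is simply the fraction of MCQs answered correctly, which is how the 53% peak performance above should be read.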
