

MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs

May 27, 2025
Authors: Jiakang Yuan, Tianshuo Peng, Yilei Jiang, Yiting Lu, Renrui Zhang, Kaituo Feng, Chaoyou Fu, Tao Chen, Lei Bai, Bo Zhang, Xiangyu Yue
cs.AI

Abstract

Logical reasoning is a fundamental aspect of human intelligence and an essential capability for multimodal large language models (MLLMs). Despite significant advances in multimodal reasoning, existing benchmarks fail to comprehensively evaluate reasoning abilities due to the lack of an explicit categorization of logical reasoning types and an unclear understanding of reasoning. To address these issues, we introduce MME-Reasoning, a comprehensive benchmark designed to evaluate the reasoning ability of MLLMs, whose questions cover all three types of reasoning (i.e., inductive, deductive, and abductive). We carefully curate the data to ensure that each question effectively evaluates reasoning ability rather than perceptual skills or knowledge breadth, and extend the evaluation protocols to cover diverse question types. Our evaluation reveals substantial limitations of state-of-the-art MLLMs when subjected to holistic assessments of logical reasoning capabilities. Even the most advanced MLLMs show limited performance in comprehensive logical reasoning, with notable performance imbalances across reasoning types. In addition, we conduct an in-depth analysis of approaches commonly believed to enhance reasoning abilities, such as "thinking mode" and rule-based RL. These findings highlight the critical limitations and performance imbalances of current MLLMs in diverse logical reasoning scenarios, providing comprehensive and systematic insights into the understanding and evaluation of reasoning capabilities.
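As a rough illustration of the per-type breakdown the abstract alludes to, the following minimal Python sketch groups graded predictions by reasoning type (inductive, deductive, abductive) and reports accuracy for each type, which is how a performance imbalance across reasoning types would surface. The record fields (reasoning_type, prediction, answer) and the exact-match grading are hypothetical placeholders, not the benchmark's official schema or evaluation protocol.

```python
# Sketch of a per-reasoning-type accuracy breakdown, assuming a simple
# (hypothetical) record format; MME-Reasoning's actual evaluation protocol
# covers more diverse question types than exact-match grading.
from collections import defaultdict

def per_type_accuracy(records):
    """Group graded predictions by reasoning type and return accuracy per type."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        rtype = r["reasoning_type"]  # e.g. "inductive", "deductive", "abductive"
        total[rtype] += 1
        if r["prediction"].strip().lower() == r["answer"].strip().lower():
            correct[rtype] += 1
    return {t: correct[t] / total[t] for t in total}

# Toy usage (illustrative data only):
records = [
    {"reasoning_type": "deductive", "prediction": "B", "answer": "B"},
    {"reasoning_type": "inductive", "prediction": "17", "answer": "21"},
    {"reasoning_type": "abductive", "prediction": "rain", "answer": "rain"},
]
print(per_type_accuracy(records))  # {'deductive': 1.0, 'inductive': 0.0, 'abductive': 1.0}
```

Reporting accuracy per reasoning type, rather than a single aggregate score, is what makes imbalances between inductive, deductive, and abductive performance visible.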

