Multimodal Inconsistency Reasoning (MMIR): A New Benchmark for Multimodal Reasoning Models
February 22, 2025
Authors: Qianqi Yan, Yue Fan, Hongquan Li, Shan Jiang, Yang Zhao, Xinze Guan, Ching-Chen Kuo, Xin Eric Wang
cs.AI
Abstract
Existing Multimodal Large Language Models (MLLMs) are predominantly trained and tested on consistent visual-textual inputs, leaving open the question of whether they can handle inconsistencies in real-world, layout-rich content. To bridge this gap, we propose the Multimodal Inconsistency Reasoning (MMIR) benchmark to assess MLLMs' ability to detect and reason about semantic mismatches in artifacts such as webpages, presentation slides, and posters. MMIR comprises 534 challenging samples, each containing synthetically injected errors across five reasoning-heavy categories: Factual Contradiction, Identity Misattribution, Contextual Mismatch, Quantitative Discrepancy, and Temporal/Spatial Incoherence. We evaluate six state-of-the-art MLLMs, showing that models with dedicated multimodal reasoning capabilities, such as o1, substantially outperform their counterparts, while open-source models remain particularly vulnerable to inconsistency errors. Detailed error analyses further show that models excel in detecting inconsistencies confined to a single modality, particularly in text, but struggle with cross-modal conflicts and complex layouts. Probing experiments reveal that single-modality prompting, including Chain-of-Thought (CoT) and Set-of-Mark (SoM) methods, yields marginal gains, revealing a key bottleneck in cross-modal reasoning. Our findings highlight the need for advanced multimodal reasoning and point to future research on multimodal inconsistency.