

Hidden in Plain Sight: Probing Implicit Reasoning in Multimodal Language Models

May 30, 2025
Authors: Qianqi Yan, Hongquan Li, Shan Jiang, Yang Zhao, Xinze Guan, Ching-Chen Kuo, Xin Eric Wang
cs.AI

Abstract

Multimodal large language models (MLLMs) are increasingly deployed in open-ended, real-world environments where inputs are messy, underspecified, and not always trustworthy. Unlike curated benchmarks, these settings frequently involve instructions that refer to missing objects or contradictory facts, rely on ambiguous references, or request infeasible actions. In such cases, success hinges not on task execution alone, but on a model's ability to detect when something is silently wrong. This paper presents a systematic analysis of how current MLLMs handle such implicit reasoning scenarios: cases where the flaw is not explicitly stated but must be inferred from context. Using a curated diagnostic suite spanning four categories of real-world failure modes, we evaluate six MLLMs, including o3 and GPT-4o, and find that models frequently fail to surface hidden issues, even when they possess the necessary perceptual and reasoning skills. Explicit prompting reveals that the underlying capabilities exist but are often suppressed in favor of user compliance. We further show that simple inference-time interventions, such as cautious persona prompting and, in particular, requiring a clarifying question, can dramatically recover performance. Our findings highlight a persistent gap between reasoning competence and behavioral compliance in current MLLMs and suggest practical strategies for making these models more trustworthy in underconstrained environments.
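The two inference-time interventions named in the abstract (a cautious persona and a required clarifying question) amount to system-prompt changes. The following is a minimal, hypothetical sketch of what such interventions could look like using the OpenAI Python SDK; the prompt wording, the `query_with_intervention` helper, and its parameters are illustrative assumptions, not the paper's actual prompts or code.

```python
# Illustrative sketch of the abstract's inference-time interventions.
# Prompt texts below are assumptions for demonstration, not the paper's prompts.
from openai import OpenAI

client = OpenAI()

# Intervention 1: cautious persona prompting.
CAUTIOUS_PERSONA = (
    "You are a careful assistant. Before following an instruction about an "
    "image, check whether it refers to missing objects, contradicts visible "
    "facts, relies on ambiguous references, or requests an infeasible action. "
    "If so, surface the problem instead of silently complying."
)

# Intervention 2: require a clarifying question when the input is underspecified.
CLARIFY_FIRST = (
    "If anything in the instruction is underspecified or inconsistent with "
    "the image, ask one clarifying question before attempting an answer."
)

def query_with_intervention(instruction: str, image_url: str,
                            system_prompt: str) -> str:
    """Send a multimodal request with the chosen intervention as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # one of the six MLLMs evaluated in the paper
        messages=[
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            },
        ],
    )
    return response.choices[0].message.content

# Example: an instruction referring to an object that may be absent from the image.
# answer = query_with_intervention(
#     "Hand me the red mug on the desk.", "https://example.com/desk.jpg",
#     CLARIFY_FIRST,
# )
```

Per the abstract's findings, the clarifying-question variant (`CLARIFY_FIRST` here) is the more effective of the two at recovering suppressed reasoning, since it gives the model an explicit license to withhold compliance.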