Tinted Frames: Question Framing Blinds Vision-Language Models
March 19, 2026
Authors: Wan-Cyuan Fan, Jiayun Luo, Declan Kutscher, Leonid Sigal, Ritwik Gupta
cs.AI
Abstract
Vision-Language Models (VLMs) have been shown to be blind, often underutilizing their visual inputs even on tasks that require visual reasoning. In this work, we demonstrate that VLMs are selectively blind: they modulate the amount of attention applied to visual inputs based on linguistic framing, even when alternative framings demand identical visual reasoning. Using visual attention as a probe, we quantify how framing alters both the amount and distribution of attention over the image. Constrained framings, such as multiple-choice and yes/no, induce substantially lower attention to image context than open-ended framings, reduce focus on task-relevant regions, and shift attention toward uninformative tokens. We further demonstrate that this attention misallocation is the principal cause of degraded accuracy and cross-framing inconsistency. Building on this mechanistic insight, we introduce a lightweight prompt-tuning method that uses learnable tokens to encourage the robust, visually grounded attention patterns observed in open-ended settings, improving both visual grounding and performance across framings.
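The "amount of attention applied to visual inputs" that the abstract measures can be illustrated with a minimal sketch: given a row-stochastic attention matrix from a transformer layer and a mask marking which key positions correspond to image tokens, sum the attention mass each query places on those positions. The function name and the toy numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

def image_attention_share(attn, image_mask):
    """Fraction of each query token's attention mass allocated to image tokens.

    attn:       (num_queries, num_keys) array of attention weights;
                each row sums to 1 (post-softmax).
    image_mask: boolean (num_keys,) array marking image-token key positions.
    Returns a (num_queries,) array of per-query shares in [0, 1].
    """
    attn = np.asarray(attn, dtype=float)
    mask = np.asarray(image_mask, dtype=bool)
    # Select the columns belonging to image tokens and sum along keys.
    return attn[:, mask].sum(axis=1)

# Toy example: 2 query tokens over 4 keys, where keys 0-1 are image patches.
attn = np.array([[0.4, 0.3, 0.2, 0.1],
                 [0.1, 0.1, 0.4, 0.4]])
share = image_attention_share(attn, [True, True, False, False])
# share → [0.7, 0.2]: the first query attends mostly to the image,
# the second mostly to the text context.
```

Comparing this share across prompts that differ only in framing (open-ended vs. multiple-choice vs. yes/no) is one way to operationalize the selective-blindness claim; in practice the weights would come from a VLM's cross- or self-attention layers (e.g. via a library option that returns attention maps) rather than a hand-written matrix.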