

Tinted Frames: Question Framing Blinds Vision-Language Models

March 19, 2026
Authors: Wan-Cyuan Fan, Jiayun Luo, Declan Kutscher, Leonid Sigal, Ritwik Gupta
cs.AI

Abstract

Vision-Language Models (VLMs) have been shown to be blind, often underutilizing their visual inputs even on tasks that require visual reasoning. In this work, we demonstrate that VLMs are selectively blind: they modulate the amount of attention applied to visual inputs based on linguistic framing, even when alternative framings demand identical visual reasoning. Using visual attention as a probe, we quantify how framing alters both the amount and distribution of attention over the image. Constrained framings, such as multiple-choice and yes/no, induce substantially lower attention to image context than open-ended framings, reduce focus on task-relevant regions, and shift attention towards uninformative tokens. We further demonstrate that this attention misallocation is the principal cause of degraded accuracy and cross-framing inconsistency. Building on this mechanistic insight, we introduce a lightweight prompt-tuning method using learnable tokens that encourages the robust, visually grounded attention patterns observed in open-ended settings, improving both visual grounding and performance across framings.
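The abstract's central measurement, "the amount of attention applied to visual inputs," can be illustrated with a minimal sketch: given the attention weights a generated token assigns over the input sequence, sum the mass landing on the positions that correspond to image patches. The function name and the toy weights below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def image_attention_share(attn, image_token_idx):
    """Fraction of one query token's attention mass that lands on image tokens.

    attn: 1-D array of attention weights over all key positions.
    image_token_idx: iterable of key positions holding image patches.
    """
    attn = np.asarray(attn, dtype=float)
    return float(attn[list(image_token_idx)].sum() / attn.sum())

# Toy example: 6 key positions, positions 0-3 are image patches.
attn = np.array([0.10, 0.15, 0.05, 0.10, 0.35, 0.25])
share = image_attention_share(attn, range(4))  # 0.40 of the mass is on the image
```

Averaging this share over generated tokens (and over heads/layers) under different question framings would reproduce the kind of comparison the abstract describes: constrained framings yielding a lower image-attention share than open-ended ones.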