Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?
October 31, 2023
Authors: Yichi Zhang, Jiayi Pan, Yuchen Zhou, Rui Pan, Joyce Chai
cs.AI
Abstract
Vision-Language Models (VLMs) are trained on vast amounts of data captured by humans, and that data emulates human understanding of the world. However, human perception of reality is not always faithful to the physical world, a phenomenon known as visual illusions. This raises a key question: do VLMs experience similar kinds of illusions as humans do, or do they faithfully learn to represent reality? To investigate this question, we build a dataset containing five types of visual illusions and formulate four tasks to examine visual illusions in state-of-the-art VLMs. Our findings show that although overall alignment with human perception is low, larger models are closer to human perception and more susceptible to visual illusions. Our dataset and initial findings will promote a better understanding of visual illusions in humans and machines, and provide a stepping stone for future computational models that can better align humans and machines in perceiving and communicating about the shared visual world. The code and data are available at https://github.com/vl-illusion/dataset.
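To make the evaluation setup the abstract describes more concrete, the sketch below shows one way to probe a VLM with a question about an illusion image and score its answer against both the human (illusion-affected) response and the physical ground truth. This is a minimal sketch under stated assumptions, not the repository's actual API: the annotation file path, the JSON field names, and the query_vlm() stub are all hypothetical placeholders.

# Minimal sketch of one illusion-probing task: ask a VLM a question
# about an illusion image and compare its answer against both the
# human (illusion-affected) answer and the physical ground truth.
# NOTE: "data/annotations.json", the JSON fields, and query_vlm()
# are hypothetical stand-ins, not the repository's actual interface.
import json

from PIL import Image


def query_vlm(image: Image.Image, question: str) -> str:
    """Stand-in for a call to a vision-language model of your choice."""
    raise NotImplementedError("plug in a VLM here")


with open("data/annotations.json") as f:  # hypothetical path
    examples = json.load(f)

human_aligned = 0  # answers matching illusion-affected human perception
truthful = 0       # answers matching physical reality

for ex in examples:
    image = Image.open(ex["image_path"])
    answer = query_vlm(image, ex["question"]).strip().lower()
    human_aligned += answer == ex["human_answer"].lower()
    truthful += answer == ex["ground_truth"].lower()

n = len(examples)
print(f"human-aligned: {human_aligned / n:.2%}, truthful: {truthful / n:.2%}")

Scoring against both references separately is what lets a study like this distinguish a model that faithfully represents reality from one that, like larger models in the paper's findings, drifts toward human-like illusions.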