OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models
June 3, 2025
Authors: Mengdi Jia, Zekun Qi, Shaochen Zhang, Wenyao Zhang, Xinqiang Yu, Jiawei He, He Wang, Li Yi
cs.AI
Abstract
Spatial reasoning is a key aspect of cognitive psychology and remains a major
bottleneck for current vision-language models (VLMs). While extensive research
has aimed to evaluate or improve VLMs' understanding of basic spatial
relations, such as distinguishing left from right, near from far, and object
counting, these tasks represent only the most fundamental level of spatial
reasoning. In this work, we introduce OmniSpatial, a comprehensive and
challenging benchmark for spatial reasoning, grounded in cognitive psychology.
OmniSpatial covers four major categories: dynamic reasoning, complex spatial
logic, spatial interaction, and perspective-taking, with 50 fine-grained
subcategories. Through Internet data crawling and careful manual annotation, we
construct over 1.5K question-answer pairs. Extensive experiments show that both
open- and closed-source VLMs, as well as existing reasoning and spatial
understanding models, exhibit significant limitations in comprehensive spatial
understanding. We further analyze failure cases and propose potential
directions for future research.
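To make the evaluation setup concrete, below is a minimal sketch of how a benchmark structured this way might be scored. The abstract does not specify OmniSpatial's data schema or evaluation protocol, so everything here is an assumption: the multiple-choice format, the `SpatialQA` record, its field names, and the `model.predict` interface are all hypothetical stand-ins, not the paper's actual API.

```python
# Hypothetical sketch of per-category accuracy evaluation for a
# multiple-choice spatial-reasoning benchmark. The data format and
# model interface are assumptions, not OmniSpatial's real schema.
from dataclasses import dataclass

@dataclass
class SpatialQA:
    image_path: str       # assumed: one image per question
    question: str
    choices: list[str]    # assumed multiple-choice format
    answer_index: int     # index of the correct choice
    category: str         # one of the four major categories
    subcategory: str      # one of the 50 fine-grained subcategories

# The four major categories named in the abstract.
CATEGORIES = [
    "dynamic_reasoning",
    "complex_spatial_logic",
    "spatial_interaction",
    "perspective_taking",
]

def evaluate(model, dataset: list[SpatialQA]) -> dict[str, float]:
    """Compute per-category accuracy. `model.predict` is a stand-in
    for whatever VLM API returns a predicted choice index."""
    correct = {c: 0 for c in CATEGORIES}
    total = {c: 0 for c in CATEGORIES}
    for qa in dataset:
        pred = model.predict(qa.image_path, qa.question, qa.choices)
        total[qa.category] += 1
        if pred == qa.answer_index:
            correct[qa.category] += 1
    return {c: correct[c] / total[c] for c in CATEGORIES if total[c]}
```

Reporting accuracy per category rather than a single aggregate score is what lets a benchmark like this localize where models fail, e.g. whether a VLM is weaker at perspective-taking than at spatial interaction.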