TopoPerception: A Shortcut-Free Evaluation of Global Visual Perception in Large Vision-Language Models

November 14, 2025
Authors: Wenhao Zhou, Hao Zheng, Rong Zhao
cs.AI

Abstract

Large Vision-Language Models (LVLMs) typically align visual features from an encoder with a pre-trained Large Language Model (LLM). However, this design makes the visual perception module a bottleneck that constrains the overall capabilities of LVLMs. Conventional evaluation benchmarks, while rich in visual semantics, often contain unavoidable local shortcuts that can lead to an overestimation of models' perceptual abilities. Here, we introduce TopoPerception, a benchmark that leverages topological properties to rigorously evaluate the global visual perception capabilities of LVLMs across various granularities. Since topology depends on the global structure of an image and is invariant to local features, TopoPerception enables a shortcut-free assessment of global perception, fundamentally distinguishing it from semantically rich tasks. We evaluate state-of-the-art models on TopoPerception and find that even at the coarsest perceptual granularity, all models perform no better than random chance, indicating a profound inability to perceive global visual features. Notably, a consistent trend emerges within model families: more powerful models with stronger reasoning capabilities exhibit lower accuracy. This suggests that merely scaling up models is insufficient to address this deficit and may even exacerbate it. Progress may require new training paradigms or architectures. TopoPerception not only exposes a critical bottleneck in current LVLMs but also offers a lens and direction for improving their global visual perception. The data and code are publicly available at: https://github.com/Wenhao-Zhou/TopoPerception.
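To make the core idea concrete, here is a minimal sketch of why a topological property resists local shortcuts. The example uses connected-component counting, a standard topological invariant of binary images; this is an illustrative assumption, not the paper's actual benchmark construction. Two images can share many identical local patches yet differ in how many connected regions they contain, so the count can only be read off from the image's global structure.

```python
from collections import deque

def connected_components(grid):
    """Count 4-connected components of 1-pixels in a binary grid.

    The component count is a topological property: it depends on the
    global structure of the image, not on any local patch in isolation.
    """
    rows, cols = len(grid), len(grid[0])
    seen = set()
    components = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                components += 1
                # Breadth-first flood fill over the new component.
                queue = deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
    return components

# Two toy images with identical top and bottom rows (shared local
# patches) that nevertheless differ topologically:
connected = [[1, 1, 1],
             [0, 0, 1],
             [1, 1, 1]]   # bridged on the right: one component
split     = [[1, 1, 1],
             [0, 0, 0],
             [1, 1, 1]]   # no bridge: two components
print(connected_components(connected))  # 1
print(connected_components(split))      # 2
```

A model that attends only to local patches cannot distinguish these two inputs reliably, which is the sense in which such an evaluation is shortcut-free.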