Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology
July 10, 2025
Authors: Haochen Wang, Xiangtai Li, Zilong Huang, Anran Wang, Jiacong Wang, Tao Zhang, Jiani Zheng, Sule Bai, Zijian Kang, Jiashi Feng, Zhuochen Wang, Zhaoxiang Zhang
cs.AI
Abstract
Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, much as humans "think with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding-box evaluation, and (3) second-order reasoning that tests object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we first sample 1K high-quality images from SA-1B and recruit eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none reaches 60% accuracy, and OpenAI-o3, for example, scores only 54.87. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm that jointly supervises localization and reasoning with reinforcement learning, enabling accurate localization and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves performance on V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), demonstrating that traceability is key to advancing vision-grounded reasoning. The code is available at https://github.com/Haochen-Wang409/TreeVGR.
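To make the central idea concrete, below is a minimal sketch of what jointly rewarding answer correctness and traceable evidence could look like. The abstract only states that TreeBench scores evidence via bounding boxes and that TreeVGR supervises localization and reasoning together with reinforcement learning; the greedy IoU matching, the `joint_reward` helper, and the `w_loc` weighting here are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of a joint localization + reasoning reward in the
# spirit of TreeVGR. The exact reward shaping is an assumption; see the
# repository at https://github.com/Haochen-Wang409/TreeVGR for the real code.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def traceability_reward(pred_boxes: List[Box], gt_boxes: List[Box]) -> float:
    """Greedily match each ground-truth evidence box to its best remaining
    predicted box; the reward is the mean IoU over ground-truth boxes."""
    if not gt_boxes:
        return 0.0
    remaining = list(pred_boxes)
    total = 0.0
    for gt in gt_boxes:
        if not remaining:
            break
        best = max(remaining, key=lambda p: iou(p, gt))
        total += iou(best, gt)
        remaining.remove(best)
    return total / len(gt_boxes)

def joint_reward(answer_correct: bool,
                 pred_boxes: List[Box],
                 gt_boxes: List[Box],
                 w_loc: float = 0.5) -> float:
    """Combine answer accuracy with localization quality. The weight w_loc
    is an assumed hyperparameter, not a value reported by the paper."""
    r_acc = 1.0 if answer_correct else 0.0
    r_loc = traceability_reward(pred_boxes, gt_boxes)
    return r_acc + w_loc * r_loc

if __name__ == "__main__":
    # A correct answer backed by a well-aligned evidence box scores highest.
    print(joint_reward(True, [(10, 10, 50, 50)], [(12, 12, 48, 52)]))
```

The design intent this sketch tries to capture is the abstract's claim that traceability matters: an answer that is correct but cites the wrong region earns less reward than one whose cited boxes overlap the annotated evidence, pushing the policy toward explainable reasoning pathways.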