

Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology

July 10, 2025
Authors: Haochen Wang, Xiangtai Li, Zilong Huang, Anran Wang, Jiacong Wang, Tao Zhang, Jiani Zheng, Sule Bai, Zijian Kang, Jiashi Feng, Zhuochen Wang, Zhaoxiang Zhang
cs.AI

Abstract

Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, much as humans "think with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning that tests object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B and engage eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none reaches 60% accuracy; for example, OpenAI-o3 scores only 54.87. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm that supervises localization and reasoning jointly with reinforcement learning, enabling accurate localization and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), demonstrating that traceability is key to advancing visual grounded reasoning. The code is available at https://github.com/Haochen-Wang409/TreeVGR.
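To make the idea of "traceable evidence via bounding box evaluation" and joint supervision of localization and answering more concrete, the following is a minimal Python sketch of how such an evaluation or reward signal could be computed. The function names (iou, localization_score, joint_reward), the greedy box matching, and the weighting factor alpha are illustrative assumptions, not the authors' actual TreeBench metric or TreeVGR reward; the paper and repository define the real formulation.

```python
# Hypothetical sketch: blend answer correctness with evidence-box quality,
# in the spirit of traceable-evidence evaluation / joint RL supervision.
# All names and the alpha weighting are assumptions for illustration only.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def localization_score(pred: List[Box], gt: List[Box]) -> float:
    """Greedy one-to-one matching: average best IoU over ground-truth evidence boxes."""
    if not gt:
        return 0.0
    remaining = list(pred)
    total = 0.0
    for g in gt:
        if not remaining:
            break
        best = max(remaining, key=lambda p: iou(p, g))
        total += iou(best, g)
        remaining.remove(best)
    return total / len(gt)


def joint_reward(pred_answer: str, gt_answer: str,
                 pred_boxes: List[Box], gt_boxes: List[Box],
                 alpha: float = 0.5) -> float:
    """Combine answer accuracy with evidence localization (assumed weighting alpha)."""
    acc = 1.0 if pred_answer.strip().lower() == gt_answer.strip().lower() else 0.0
    return alpha * acc + (1.0 - alpha) * localization_score(pred_boxes, gt_boxes)


# Example: correct option "B", but the predicted evidence box only roughly
# overlaps the annotated one, so the reward falls between 0.5 and 1.0.
print(joint_reward("B", "b", [(10, 10, 50, 50)], [(12, 12, 48, 52)]))
```

Such a combined signal rewards a model only when its answer is both correct and grounded in the annotated regions, which is the intuition behind requiring traceable evidence rather than answer accuracy alone.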