Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens
November 24, 2025
Authors: Yiming Qin, Bomin Wei, Jiaxin Ge, Konstantinos Kallidromitis, Stephanie Fu, Trevor Darrell, Xudong Wang
cs.AI
Abstract
Vision-Language Models (VLMs) excel at reasoning in linguistic space but struggle with perceptual understanding that requires dense visual perception, e.g., spatial reasoning and geometric awareness. This limitation stems from current VLMs' limited mechanisms for capturing dense visual information across spatial dimensions. We introduce Chain-of-Visual-Thought (COVT), a framework that enables VLMs to reason not only in words but also through continuous visual tokens, compact latent representations that encode rich perceptual cues. Within a small budget of roughly 20 tokens, COVT distills knowledge from lightweight vision experts, capturing complementary properties such as 2D appearance, 3D geometry, spatial layout, and edge structure. During training, the VLM with COVT autoregressively predicts these visual tokens to reconstruct dense supervision signals (e.g., depth, segmentation, edges, and DINO features). At inference, the model reasons directly in the continuous visual token space, preserving efficiency while optionally decoding dense predictions for interpretability. Evaluated across more than ten diverse perception benchmarks, including CV-Bench, MMVP, RealWorldQA, MMStar, WorldMedQA, and HRBench, integrating COVT into strong VLMs such as Qwen2.5-VL and LLaVA consistently improves performance by 3% to 16%, demonstrating that compact continuous visual thinking enables more precise, grounded, and interpretable multimodal intelligence.
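To make the distillation objective described above more concrete, the following is a minimal PyTorch-style sketch: a small set of continuous visual tokens (stand-ins for VLM hidden states at the visual-thought positions) is decoded by lightweight heads into dense expert targets such as depth, segmentation, edge maps, and DINO features. This is only an illustration inferred from the abstract, not the authors' implementation; the module names, head designs, losses, and tensor shapes (e.g., `VisualThoughtHeads`, `map_size`) are assumptions.

```python
# Hypothetical sketch of a COVT-style distillation loss (not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualThoughtHeads(nn.Module):
    """Decode ~20 continuous visual tokens into dense expert targets (illustrative)."""

    def __init__(self, hidden_dim: int = 1024, num_tokens: int = 20,
                 map_size: int = 16, dino_dim: int = 768):
        super().__init__()
        self.num_tokens = num_tokens          # token budget, per the abstract
        self.map_size = map_size
        # One small linear decoder per expert signal (hypothetical design choice).
        self.depth_head = nn.Linear(hidden_dim, map_size * map_size)  # depth map
        self.seg_head = nn.Linear(hidden_dim, map_size * map_size)    # mask logits
        self.edge_head = nn.Linear(hidden_dim, map_size * map_size)   # edge logits
        self.dino_head = nn.Linear(hidden_dim, dino_dim)              # DINO feature

    def forward(self, visual_tokens: torch.Tensor):
        # visual_tokens: (batch, num_tokens, hidden_dim); pool over the token dimension.
        pooled = visual_tokens.mean(dim=1)
        b = pooled.size(0)
        depth = self.depth_head(pooled).view(b, self.map_size, self.map_size)
        seg = self.seg_head(pooled).view(b, self.map_size, self.map_size)
        edge = self.edge_head(pooled).view(b, self.map_size, self.map_size)
        dino = self.dino_head(pooled)
        return depth, seg, edge, dino


def covt_distillation_loss(visual_tokens, expert_targets, heads: VisualThoughtHeads):
    """Reconstruct dense expert signals from the continuous visual tokens."""
    depth, seg, edge, dino = heads(visual_tokens)
    return (
        F.l1_loss(depth, expert_targets["depth"])
        + F.binary_cross_entropy_with_logits(seg, expert_targets["seg"])
        + F.binary_cross_entropy_with_logits(edge, expert_targets["edge"])
        + (1.0 - F.cosine_similarity(dino, expert_targets["dino"], dim=-1)).mean()
    )


if __name__ == "__main__":
    heads = VisualThoughtHeads()
    tokens = torch.randn(2, 20, 1024)  # stand-in for VLM hidden states
    targets = {
        "depth": torch.rand(2, 16, 16),
        "seg": torch.randint(0, 2, (2, 16, 16)).float(),
        "edge": torch.randint(0, 2, (2, 16, 16)).float(),
        "dino": torch.randn(2, 768),
    }
    print(covt_distillation_loss(tokens, targets, heads).item())
```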