

SPARC: Separating Perception And Reasoning Circuits for Test-time Scaling of VLMs

February 6, 2026
作者: Niccolo Avogaro, Nayanika Debnath, Li Mi, Thomas Frick, Junling Wang, Zexue He, Hang Hua, Konrad Schindler, Mattia Rigotti
cs.AI

Abstract

Despite recent successes, test-time scaling - i.e., dynamically expanding the token budget during inference as needed - remains brittle for vision-language models (VLMs): unstructured chains-of-thought about images entangle perception and reasoning, leading to long, disorganized contexts where small perceptual mistakes may cascade into completely wrong answers. Moreover, expensive reinforcement learning with hand-crafted rewards is required to achieve good performance. Here, we introduce SPARC (Separating Perception And Reasoning Circuits), a modular framework that explicitly decouples visual perception from reasoning. Inspired by sequential sensory-to-cognitive processing in the brain, SPARC implements a two-stage pipeline in which the model first performs explicit visual search to localize question-relevant regions, then conditions its reasoning on those regions to produce the final answer. This separation enables independent test-time scaling with asymmetric compute allocation (e.g., prioritizing perceptual processing under distribution shift), supports selective optimization (e.g., improving the perceptual stage alone when it is the bottleneck for end-to-end performance), and accommodates compressed contexts by running global search at lower image resolutions and allocating high-resolution processing only to selected regions, thereby reducing the total visual token count and compute. Across challenging visual reasoning benchmarks, SPARC outperforms monolithic baselines and strong visual-grounding approaches. For instance, SPARC improves the accuracy of Qwen3VL-4B on the V^* VQA benchmark by 6.7 percentage points, and it surpasses "thinking with images" by 4.6 points on a challenging OOD task despite requiring a 200× smaller token budget.
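The two-stage pipeline described above can be sketched in code. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`visual_search`, `reason_over_regions`, `sparc_pipeline`), the fixed search result, and the ViT-style 14×14-patch token estimate are all hypothetical stand-ins for the paper's VLM-driven search and reasoning stages. Its purpose is only to show how a low-resolution global search can cut the visual token budget before high-resolution reasoning.

```python
# Hedged sketch of SPARC's perception/reasoning split (illustrative only).
# A real system would call a VLM in both stages; here both are stubbed.

def visual_search(image, question, downscale=4):
    """Stage 1 (perception): run a cheap global search on a low-resolution
    view and return question-relevant regions as bounding boxes in
    full-resolution coordinates."""
    h, w = image["height"], image["width"]
    low_h, low_w = h // downscale, w // downscale
    # Stub: pretend the relevant object sits in the top-left quadrant
    # of the low-resolution view (a VLM would propose this box).
    box_low = (0, 0, low_w // 2, low_h // 2)  # (x0, y0, x1, y1)
    # Map the low-resolution box back to full-resolution coordinates
    # so the next stage can crop at high resolution.
    x0, y0, x1, y1 = (c * downscale for c in box_low)
    return [(x0, y0, x1, y1)]

def reason_over_regions(image, question, boxes):
    """Stage 2 (reasoning): condition only on high-resolution crops of the
    selected regions, instead of encoding the whole image."""
    crops = [
        {"box": b, "tokens": ((b[2] - b[0]) * (b[3] - b[1])) // 196}
        for b in boxes  # rough token count assuming 14x14-pixel patches
    ]
    total_tokens = sum(c["tokens"] for c in crops)
    answer = f"answer conditioned on {len(crops)} region(s)"
    return answer, total_tokens

def sparc_pipeline(image, question):
    boxes = visual_search(image, question)
    return reason_over_regions(image, question, boxes)

image = {"height": 1024, "width": 1024}
answer, tokens = sparc_pipeline(image, "Where is the small object?")
full_tokens = (image["height"] * image["width"]) // 196
print(answer, tokens, full_tokens)  # region crop uses far fewer tokens
```

In this toy setup the selected 512×512 crop costs roughly a quarter of the tokens of the full 1024×1024 image, which mirrors the abstract's point: global search runs cheaply at low resolution, and high-resolution compute is spent only where the question needs it.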
PDF · March 16, 2026