ChatPaper.ai


SpecEyes: Accelerating Agentic Multimodal LLMs via Speculative Perception and Planning

March 24, 2026
Authors: Haoyu Huang, Jinfa Huang, Zhongwei Wan, Xiawu Zheng, Rongrong Ji, Jiebo Luo
cs.AI

Abstract

Agentic multimodal large language models (MLLMs) (e.g., OpenAI o3 and Gemini Agentic Vision) achieve remarkable reasoning capabilities through iterative visual tool invocation. However, the cascaded perception, reasoning, and tool-calling loops introduce significant sequential overhead. This overhead, termed agentic depth, incurs prohibitive latency and seriously limits system-level concurrency. To this end, we propose SpecEyes, an agentic-level speculative acceleration framework that breaks this sequential bottleneck. Our key insight is that a lightweight, tool-free MLLM can serve as a speculative planner to predict the execution trajectory, enabling early termination of expensive tool chains without sacrificing accuracy. To regulate this speculative planning, we introduce a cognitive gating mechanism based on answer separability, which quantifies the model's confidence for self-verification without requiring oracle labels. Furthermore, we design a heterogeneous parallel funnel that exploits the stateless concurrency of the small model to mask the stateful serial execution of the large model, maximizing system throughput. Extensive experiments on V* Bench, HR-Bench, and POPE demonstrate that SpecEyes achieves 1.1-3.35x speedup over the agentic baseline while preserving or even improving accuracy (up to +6.7%), thereby boosting serving throughput under concurrent workloads.
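The abstract's core mechanism can be illustrated with a minimal sketch: a cheap, tool-free draft model answers first, a gate measures how separable its top answer is from the runner-up, and only low-confidence queries fall through to the expensive agentic tool loop. All names, signatures, and the threshold below are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of SpecEyes-style speculative gating.
# `draft_model` and `agentic_model` are placeholder callables, and `tau`
# is an assumed confidence threshold; none of these come from the paper.
import math


def answer_separability(logprobs):
    """Margin between the top-2 candidate-answer probabilities.

    A large margin suggests the draft model's self-verification is
    confident; a small margin suggests ambiguity.
    """
    probs = sorted((math.exp(lp) for lp in logprobs), reverse=True)
    return probs[0] - probs[1] if len(probs) > 1 else probs[0]


def speculative_answer(question, draft_model, agentic_model, tau=0.5):
    """Try the tool-free draft first; run the full perception/reasoning/
    tool-calling loop only when the gate rejects the draft answer."""
    answer, logprobs = draft_model(question)
    if answer_separability(logprobs) >= tau:
        return answer               # accept: the costly tool chain is skipped
    return agentic_model(question)  # reject: fall back to the agentic loop
```

In a serving setting, many such stateless draft calls could run concurrently while the stateful agentic model handles only the rejected residue, which is the intuition behind the heterogeneous parallel funnel described above.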
PDF · March 26, 2026