

VISion On Request: Enhanced VLLM efficiency with sparse, dynamically selected, vision-language interactions

March 24, 2026
作者: Adrian Bulat, Alberto Baldrati, Ioannis Maniadis Metaxas, Yassine Ouali, Georgios Tzimiropoulos
cs.AI

Abstract

Existing approaches for improving the efficiency of Large Vision-Language Models (LVLMs) are largely based on the concept of visual token reduction. This approach, however, creates an information bottleneck that impairs performance, especially on challenging tasks that require fine-grained understanding and reasoning. In this work, we challenge this paradigm by introducing VISion On Request (VISOR), a method that reduces inference cost without discarding visual information. Instead of compressing the image, VISOR improves efficiency by sparsifying the interaction between image and text tokens. Specifically, the language model attends to the full set of high-resolution visual tokens through a small, strategically placed set of attention layers: general visual context is provided by efficient cross-attention between text and image tokens, while a few well-placed and dynamically selected self-attention layers refine the visual representations themselves, enabling complex, high-resolution reasoning when needed. Based on this principle, we first train a single universal network on a range of computational budgets by varying the number of self-attention layers, and then introduce a lightweight policy mechanism that dynamically allocates visual computation based on per-sample complexity. Extensive experiments show that VISOR drastically reduces computational cost while matching or exceeding state-of-the-art results across a diverse suite of benchmarks, and excels in challenging tasks that require detailed visual understanding.
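The abstract's layer arrangement can be illustrated with a small scheduling sketch. This is a hypothetical toy, not the paper's implementation: the function names (`build_layer_schedule`, `policy_budget`), the even-spacing heuristic for placing joint self-attention layers, and the scalar "complexity" input to the policy are all assumptions made for illustration. It only shows the idea that most decoder layers stay text-only, a few fixed layers cross-attend to visual tokens for cheap context, and a per-sample budget decides how many layers additionally run full self-attention over the visual tokens.

```python
# Hypothetical sketch of VISOR-style layer scheduling (names and heuristics
# are assumptions, not the paper's actual implementation).
# The LM has N decoder layers. A few fixed layers get text->image
# cross-attention; a budget-dependent subset additionally runs joint
# self-attention that includes the visual tokens, chosen per sample
# by a lightweight policy.

def build_layer_schedule(num_layers, cross_attn_layers, budget):
    """Return a per-layer list of interaction modes.

    cross_attn_layers: fixed indices where text queries cross-attend
                       to image keys (cheap visual context).
    budget: number of layers allowed to run joint (image+text)
            self-attention for this sample.
    """
    # Spread the joint self-attention layers evenly across depth
    # (an illustrative heuristic, not the paper's placement rule).
    if budget > 0:
        step = num_layers / budget
        joint = {int(i * step + step / 2) for i in range(budget)}
    else:
        joint = set()
    schedule = []
    for i in range(num_layers):
        if i in joint:
            schedule.append("joint-self-attn")   # refine visual tokens too
        elif i in cross_attn_layers:
            schedule.append("cross-attn")        # cheap visual context
        else:
            schedule.append("text-self-attn")    # text-only, no visual cost
    return schedule


def policy_budget(complexity, max_budget=4):
    """Toy policy: allocate more joint layers to harder samples.
    `complexity` is assumed to be a score in [0, 1] from a small predictor."""
    return min(max_budget, max(0, round(complexity * max_budget)))


# A mid-complexity sample gets 2 of the 4 possible joint layers.
schedule = build_layer_schedule(12, cross_attn_layers={0, 6},
                                budget=policy_budget(0.5))
print(schedule)
```

Under this sketch, the per-sample cost scales with the chosen budget: text-only layers never touch the visual tokens, so a simple sample pays mostly for the fixed cross-attention layers, while a hard sample buys extra joint layers.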