ChatPaper.ai


VISion On Request: Enhanced VLLM efficiency with sparse, dynamically selected vision-language interactions

March 24, 2026
作者: Adrian Bulat, Alberto Baldrati, Ioannis Maniadis Metaxas, Yassine Ouali, Georgios Tzimiropoulos
cs.AI

Abstract

Existing approaches for improving the efficiency of Large Vision-Language Models (LVLMs) are largely based on the concept of visual token reduction. This approach, however, creates an information bottleneck that impairs performance, especially on challenging tasks that require fine-grained understanding and reasoning. In this work, we challenge this paradigm by introducing VISion On Request (VISOR), a method that reduces inference cost without discarding visual information. Instead of compressing the image, VISOR improves efficiency by sparsifying the interaction between image and text tokens. Specifically, the language model attends to the full set of high-resolution visual tokens through a small, strategically placed set of attention layers: general visual context is provided by efficient cross-attention between text and image tokens, while a few well-placed and dynamically selected self-attention layers refine the visual representations themselves, enabling complex, high-resolution reasoning when needed. Based on this principle, we first train a single universal network across a range of computational budgets by varying the number of self-attention layers, and then introduce a lightweight policy mechanism that dynamically allocates visual computation based on per-sample complexity. Extensive experiments show that VISOR drastically reduces computational cost while matching or exceeding state-of-the-art results across a diverse suite of benchmarks, and excels in challenging tasks that require detailed visual understanding.
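The core idea described above — keep the full set of high-resolution vision tokens, provide context through cheap cross-attention, and spend extra self-attention refinement only on hard samples — can be illustrated with a minimal single-head attention sketch. This is not the authors' implementation; all function names (`cross_attention`, `self_refine`, `visor_forward`) and the scalar budget `n_refine` standing in for the learned policy are hypothetical, and projection matrices, multi-head structure, and the language-model backbone are omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_tokens, vision_tokens):
    """Text queries attend over the FULL set of vision tokens: no token pruning,
    so no information bottleneck, yet only a few such layers are needed."""
    d = text_tokens.shape[-1]
    scores = text_tokens @ vision_tokens.T / np.sqrt(d)
    return softmax(scores) @ vision_tokens

def self_refine(vision_tokens):
    """One self-attention pass that refines the visual representations
    themselves (with a residual connection), for high-resolution reasoning."""
    d = vision_tokens.shape[-1]
    scores = vision_tokens @ vision_tokens.T / np.sqrt(d)
    return vision_tokens + softmax(scores) @ vision_tokens

def visor_forward(text_tokens, vision_tokens, n_refine):
    """n_refine plays the role of the per-sample budget that the paper's
    lightweight policy would choose based on sample complexity."""
    for _ in range(n_refine):
        vision_tokens = self_refine(vision_tokens)
    return cross_attention(text_tokens, vision_tokens)

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))     # 4 text tokens, dim 8
vision = rng.normal(size=(16, 8))  # 16 high-resolution vision tokens
cheap = visor_forward(text, vision, n_refine=0)  # easy sample: cross-attention only
rich = visor_forward(text, vision, n_refine=2)   # hard sample: extra refinement
print(cheap.shape, rich.shape)  # both (4, 8)
```

Note the design point the sketch mirrors: the cost knob is the *number of interaction layers* (`n_refine`), not the number of vision tokens, which stays fixed at its full resolution in both the cheap and the expensive path.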
PDF · March 26, 2026