

DUET-VLM: Dual stage Unified Efficient Token reduction for VLM Training and Inference

February 21, 2026
Authors: Aditya Kumar Singh, Hitesh Kandala, Pratik Prabhanjan Brahma, Zicheng Liu, Emad Barsoum
cs.AI

Abstract

Vision-language models (VLMs) have achieved remarkable multimodal understanding and reasoning capabilities, yet they remain computationally expensive due to dense visual tokenization. Existing efficiency approaches either merge redundant visual tokens or drop them progressively in the language backbone, often trading accuracy for speed. In this work, we propose DUET-VLM, a versatile plug-and-play dual-stage compression framework that consists of (a) vision-only, redundancy-aware compression of the vision encoder's output into information-preserving tokens, followed by (b) layer-wise, text-guided saliency-based dropping of visual tokens within the language backbone to progressively prune less informative tokens. This coordinated token management enables aggressive compression while retaining critical semantics. On LLaVA-1.5-7B, our approach maintains over 99% of baseline accuracy with 67% fewer tokens, and still retains more than 97% even at an 89% reduction. With dual-stage compression applied during training, it achieves 99.7% of baseline accuracy at a 67% reduction and 97.6% at an 89% reduction, surpassing prior state-of-the-art visual token reduction methods across multiple benchmarks. When integrated into Video-LLaVA-7B, it even surpasses the baseline, achieving more than 100% of baseline accuracy with a substantial 53.1% token reduction and retaining 97.6% accuracy under an extreme 93.4% reduction. These results show that end-to-end training with DUET-VLM enables robust adaptation to reduced visual (image/video) input without sacrificing accuracy, producing compact yet semantically rich representations within the same computational budget. Our code is available at https://github.com/AMD-AGI/DUET-VLM.
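
The abstract describes a two-stage pipeline: redundancy-aware merging at the vision encoder's output, followed by text-guided, layer-wise dropping inside the language backbone. Below is a minimal PyTorch sketch of that idea for intuition only; the function names, the cosine-similarity merging heuristic, the saliency scoring, and the keep ratios are all our own assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F


def compress_visual_tokens(vis_tokens: torch.Tensor, keep_ratio: float = 0.33) -> torch.Tensor:
    """Stage (a), hypothetical: vision-only, redundancy-aware compression.

    Keeps the least redundant tokens and merges every dropped token into
    its most similar kept token, so information is aggregated rather than
    discarded. The cosine-similarity heuristic and the 0.33 keep ratio are
    illustrative assumptions, not DUET-VLM's published procedure.
    """
    n, _ = vis_tokens.shape
    k = max(1, int(n * keep_ratio))
    normed = F.normalize(vis_tokens, dim=-1)
    sim = normed @ normed.T  # pairwise cosine similarity, shape (n, n)

    # Redundancy score: mean similarity to all other tokens (self excluded).
    redundancy = (sim.sum(dim=-1) - 1.0) / max(n - 1, 1)
    keep_idx = redundancy.topk(k, largest=False).indices
    drop_mask = torch.ones(n, dtype=torch.bool, device=vis_tokens.device)
    drop_mask[keep_idx] = False

    kept = vis_tokens[keep_idx]
    if drop_mask.any():
        # Assign each dropped token to its nearest kept token, then average.
        assign = sim[drop_mask][:, keep_idx].argmax(dim=-1)
        merged = kept.clone()
        counts = torch.ones(k, device=vis_tokens.device)
        merged.index_add_(0, assign, vis_tokens[drop_mask])
        counts.index_add_(0, assign, torch.ones_like(assign, dtype=counts.dtype))
        kept = merged / counts.unsqueeze(-1)
    return kept


def text_guided_drop(vis_tokens: torch.Tensor, text_tokens: torch.Tensor,
                     keep_ratio: float = 0.5) -> torch.Tensor:
    """Stage (b), hypothetical: at a given LLM layer, score each visual token
    by its best match against the text tokens and keep only the top ones."""
    saliency = (F.normalize(vis_tokens, dim=-1)
                @ F.normalize(text_tokens, dim=-1).T).max(dim=-1).values
    k = max(1, int(vis_tokens.shape[0] * keep_ratio))
    keep_idx = saliency.topk(k).indices.sort().values  # preserve token order
    return vis_tokens[keep_idx]


# Toy usage: 576 CLIP-style patch tokens compressed to ~33%, then pruned
# further at a (mock) language-model layer against 32 text tokens.
vis = torch.randn(576, 1024)
txt = torch.randn(32, 1024)
vis = compress_visual_tokens(vis, keep_ratio=0.33)  # 576 -> 190 tokens
vis = text_guided_drop(vis, txt, keep_ratio=0.5)    # 190 -> 95 tokens
print(vis.shape)  # torch.Size([95, 1024])
```

In a full pipeline, stage (a) would run once on the vision encoder's output before the tokens enter the LLM, while stage (b) would be applied at selected language-model layers with progressively smaller keep ratios, matching the progressive pruning behavior the abstract describes.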