

TokenPacker: Efficient Visual Projector for Multimodal LLM

July 2, 2024
Authors: Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jianke Zhu, Lei Zhang
cs.AI

Abstract

The visual projector serves as an essential bridge between the visual encoder and the Large Language Model (LLM) in a Multimodal LLM (MLLM). Typically, MLLMs adopt a simple MLP to preserve all visual contexts via a one-to-one transformation. However, the visual tokens are redundant and can increase considerably when dealing with high-resolution images, significantly impairing the efficiency of MLLMs. Some recent works have introduced a resampler or abstractor to reduce the number of resulting visual tokens. Unfortunately, these fail to capture finer details and undermine the visual reasoning capabilities of MLLMs. In this work, we propose a novel visual projector that adopts a coarse-to-fine scheme to inject enriched characteristics into the generated condensed visual tokens. Specifically, we first interpolate the visual features into low-resolution point queries, providing an overall visual representation as the foundation. Then, we introduce a region-to-point injection module that uses high-resolution, multi-level region-based cues as fine-grained reference keys and values, allowing them to be fully absorbed within the corresponding local context region. This step effectively updates the coarse point queries, transforming them into enriched ones for subsequent LLM reasoning. Extensive experiments demonstrate that our approach compresses the visual tokens by 75%~89%, while achieving comparable or even better performance across diverse benchmarks with significantly higher efficiency. The source code can be found at https://github.com/CircleRadon/TokenPacker.
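The abstract describes a two-step design: interpolate the visual features into coarse low-resolution point queries, then let each query absorb its local high-resolution region via cross-attention. Below is a minimal PyTorch sketch of that idea, assuming a single feature level and a square token grid; the class, method, and parameter names are hypothetical, not the authors' released implementation, which additionally injects multi-level region cues:

```python
# Minimal sketch of the coarse-to-fine projector idea (hypothetical names,
# single feature level only; not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseToFineProjector(nn.Module):
    """Compress an H x W grid of visual tokens by scale**2 (scale=2 -> 75%)."""

    def __init__(self, dim: int, llm_dim: int, scale: int = 2, num_heads: int = 8):
        super().__init__()
        self.scale = scale
        self.q_proj = nn.Linear(dim, dim)   # coarse point queries
        self.k_proj = nn.Linear(dim, dim)   # high-res region keys
        self.v_proj = nn.Linear(dim, dim)   # high-res region values
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(dim, llm_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, H*W, C) visual tokens; assumes a square grid, H % scale == 0.
        B, N, C = feats.shape
        H = W = int(N ** 0.5)
        s = self.scale
        grid = feats.transpose(1, 2).reshape(B, C, H, W)

        # Step 1: interpolate to a low-resolution grid -> coarse point queries.
        low = F.interpolate(grid, size=(H // s, W // s), mode="bilinear",
                            align_corners=False)
        q = low.flatten(2).transpose(1, 2)            # (B, N/s^2, C)

        # Step 2: region-to-point injection. Each query attends only to its
        # own s x s high-resolution patch (the local context region).
        regions = F.unfold(grid, kernel_size=s, stride=s)               # (B, C*s*s, N/s^2)
        regions = regions.reshape(B, C, s * s, -1).permute(0, 3, 2, 1)  # (B, N/s^2, s*s, C)

        L = q.shape[1]
        q_ = self.q_proj(q).reshape(B * L, 1, C)      # one query per region
        k_ = self.k_proj(regions).reshape(B * L, s * s, C)
        v_ = self.v_proj(regions).reshape(B * L, s * s, C)
        out, _ = self.attn(q_, k_, v_)                # (B*L, 1, C)
        out = out.reshape(B, L, C) + q                # residual update of the query

        return self.out_proj(out)                     # condensed tokens for the LLM


# Usage: a 24x24 CLIP-ViT grid (576 tokens) is packed into 144 tokens.
proj = CoarseToFineProjector(dim=1024, llm_dim=4096, scale=2)
tokens = proj(torch.randn(1, 576, 1024))
print(tokens.shape)  # torch.Size([1, 144, 4096])
```

With `scale=2` the token count drops to 1/4 (75% compression), and `scale=3` keeps 1/9 (~89% compression), matching the 75%~89% range reported in the abstract.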
