TokenPacker: Efficient Visual Projector for Multimodal LLM

July 2, 2024
作者: Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jianke Zhu, Lei Zhang
cs.AI

Abstract

The visual projector serves as an essential bridge between the visual encoder and the Large Language Model (LLM) in a Multimodal LLM (MLLM). Typically, MLLMs adopt a simple MLP to preserve all visual contexts via a one-to-one transformation. However, the visual tokens are redundant and grow considerably when dealing with high-resolution images, significantly impairing the efficiency of MLLMs. Some recent works have introduced a resampler or abstractor to reduce the number of resulting visual tokens. Unfortunately, these fail to capture finer details and undermine the visual reasoning capabilities of MLLMs. In this work, we propose a novel visual projector that adopts a coarse-to-fine scheme to inject enriched characteristics and generate condensed visual tokens. Specifically, we first interpolate the visual features into low-resolution point queries, providing the overall visual representation as the foundation. Then, we introduce a region-to-point injection module that uses high-resolution, multi-level region-based cues as fine-grained reference keys and values, allowing them to be fully absorbed within the corresponding local context region. This step effectively updates the coarse point queries, transforming them into enriched ones for subsequent LLM reasoning. Extensive experiments demonstrate that our approach compresses the visual tokens by 75%~89% while achieving comparable or even better performance across diverse benchmarks with significantly higher efficiency. The source code can be found at https://github.com/CircleRadon/TokenPacker.
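
To make the mechanism concrete, below is a minimal PyTorch sketch of the coarse-to-fine projector as the abstract describes it. It is an illustration, not the authors' implementation: the module and parameter names (`TokenPackerSketch`, `scale`, etc.) are invented here, it uses single-level features where the paper injects multi-level region cues, and details such as normalization and the attention layout are assumptions. The official code is at https://github.com/CircleRadon/TokenPacker.

```python
# Illustrative sketch only; names, shapes, and the downsample factor `scale`
# are assumptions, not the official TokenPacker implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenPackerSketch(nn.Module):
    """Compress an (H x W) grid of visual tokens into an (H/s x W/s) grid.

    1) Bilinearly interpolate the feature map to a low-resolution grid of
       "point queries" (the coarse, holistic representation).
    2) Region-to-point injection: each point query cross-attends only to the
       s x s high-resolution tokens of its own local region, absorbing
       fine-grained detail before the tokens are handed to the LLM.
    """

    def __init__(self, dim: int, scale: int = 2, num_heads: int = 8):
        super().__init__()
        self.scale = scale
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)  # stand-in for the projection into LLM space

    def forward(self, feats: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # feats: (B, H*W, C) tokens from the vision encoder.
        b, n, c = feats.shape
        assert n == h * w and h % self.scale == 0 and w % self.scale == 0
        grid = feats.transpose(1, 2).reshape(b, c, h, w)

        # Step 1: coarse point queries via bilinear interpolation.
        lh, lw = h // self.scale, w // self.scale
        queries = F.interpolate(grid, size=(lh, lw), mode="bilinear",
                                align_corners=False)           # (B, C, lh, lw)
        queries = queries.flatten(2).transpose(1, 2)           # (B, lh*lw, C)

        # Step 2: region-to-point injection. Unfold the high-res grid into
        # non-overlapping s x s regions, so region i holds exactly the
        # high-resolution tokens lying under point query i.
        regions = F.unfold(grid, kernel_size=self.scale, stride=self.scale)
        regions = regions.reshape(b, c, self.scale ** 2, lh * lw)
        regions = regions.permute(0, 3, 2, 1)                  # (B, lh*lw, s^2, C)

        # Batch the per-region cross-attention: one query vs. its s^2 keys/values.
        q = self.norm_q(queries).reshape(b * lh * lw, 1, c)
        kv = self.norm_kv(regions).reshape(b * lh * lw, self.scale ** 2, c)
        injected, _ = self.attn(q, kv, kv)                     # (B*lh*lw, 1, C)
        injected = injected.reshape(b, lh * lw, c)

        # Residual update: the coarse query becomes an enriched one.
        return self.proj(queries + injected)                   # (B, lh*lw, C)


if __name__ == "__main__":
    # 24x24 = 576 input tokens compressed to 144 with scale=2.
    x = torch.randn(1, 576, 1024)
    packer = TokenPackerSketch(dim=1024, scale=2)
    print(packer(x, h=24, w=24).shape)  # torch.Size([1, 144, 1024])
```

The compression ratios in the abstract follow directly from the downsampling factor: `scale=2` keeps 1/4 of the tokens (a 75% reduction), while `scale=3` keeps 1/9 (roughly an 89% reduction).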
