

Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models

March 5, 2024
Authors: Gen Luo, Yiyi Zhou, Yuxin Zhang, Xiawu Zheng, Xiaoshuai Sun, Rongrong Ji
cs.AI

Abstract

Despite remarkable progress, existing multimodal large language models (MLLMs) still fall short in fine-grained visual recognition. In contrast to previous works, we study this problem from the perspective of image resolution and reveal that a combination of low- and high-resolution visual features can effectively mitigate this shortcoming. Based on this observation, we propose a novel and efficient method for MLLMs, termed Mixture-of-Resolution Adaptation (MRA). In particular, MRA adopts two visual pathways for images of different resolutions, where high-resolution visual information is embedded into the low-resolution pathway via novel mixture-of-resolution adapters (MR-Adapters). This design also greatly reduces the input sequence length of MLLMs. To validate MRA, we apply it to a recent MLLM called LLaVA and term the new model LLaVA-HR. We conduct extensive experiments on 11 vision-language (VL) tasks, which show that LLaVA-HR outperforms existing MLLMs on 8 of them, e.g., +9.4% on TextVQA. More importantly, both training and inference of LLaVA-HR remain efficient with MRA, e.g., 20 training hours and 3× faster inference compared with LLaVA-1.5. Source code is released at: https://github.com/luogen1996/LLaVA-HR.
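
To make the two-pathway design concrete, below is a minimal PyTorch sketch of a mixture-of-resolution adapter in the spirit of the abstract: high-resolution features are projected, downsampled to the low-resolution grid, and fused residually under a learned gate, so the fused sequence keeps the short low-resolution length. This is an illustration under stated assumptions, not the authors' released implementation; the class name `MRAdapter`, the 1×1 projection, the pooled-statistics tanh gate, and all dimensions are hypothetical (see the GitHub link above for the official code).

```python
# Minimal sketch of a mixture-of-resolution adapter.
# Illustrative only: module structure, gating design, and dimensions are
# assumptions, not the official LLaVA-HR implementation.
import torch
import torch.nn as nn


class MRAdapter(nn.Module):
    """Embeds high-resolution (e.g., CNN) features into a low-resolution
    (e.g., ViT) pathway, keeping the low-resolution token count."""

    def __init__(self, low_dim: int, high_dim: int):
        super().__init__()
        # Project high-res features into the low-res channel space.
        self.proj = nn.Conv2d(high_dim, low_dim, kernel_size=1)
        # Lightweight transform of the low-res features.
        self.low_conv = nn.Conv2d(low_dim, low_dim, kernel_size=3, padding=1)
        # Gate (hypothetical design) deciding how much high-res detail to inject.
        self.gate = nn.Sequential(
            nn.Linear(2 * low_dim, low_dim), nn.GELU(),
            nn.Linear(low_dim, 1), nn.Tanh(),
        )

    def forward(self, f_low: torch.Tensor, f_high: torch.Tensor) -> torch.Tensor:
        # f_low:  (B, C_low,  H,  W)  low-resolution pathway features
        # f_high: (B, C_high, kH, kW) high-resolution pathway features
        f_high = self.proj(f_high)
        # Downsample the high-res map to the low-res spatial grid.
        f_high = nn.functional.adaptive_avg_pool2d(f_high, f_low.shape[-2:])
        # Scalar gate computed from pooled statistics of both pathways.
        stats = torch.cat(
            [f_low.mean(dim=(2, 3)), f_high.mean(dim=(2, 3))], dim=-1
        )
        g = self.gate(stats).unsqueeze(-1).unsqueeze(-1)  # (B, 1, 1, 1)
        # Residual fusion: the low-res pathway absorbs gated high-res detail.
        return f_low + self.low_conv(f_low) + g * f_high


if __name__ == "__main__":
    adapter = MRAdapter(low_dim=1024, high_dim=1536)  # dims are hypothetical
    f_low = torch.randn(2, 1024, 24, 24)
    f_high = torch.randn(2, 1536, 48, 48)
    print(adapter(f_low, f_high).shape)  # torch.Size([2, 1024, 24, 24])
```

Because the fused output stays on the low-resolution grid, the language model sees only the low-resolution token sequence, which is consistent with the efficiency gains (shorter input, faster inference) the abstract reports.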