

INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model

July 23, 2024
Authors: Yiwei Ma, Zhibin Wang, Xiaoshuai Sun, Weihuang Lin, Qiang Zhou, Jiayi Ji, Rongrong Ji
cs.AI

Abstract

With advancements in data availability and computing resources, Multimodal Large Language Models (MLLMs) have showcased capabilities across various fields. However, the quadratic complexity of the vision encoder in MLLMs constrains the resolution of input images. Most current approaches mitigate this issue by cropping high-resolution images into smaller sub-images, which are then processed independently by the vision encoder. Despite capturing sufficient local details, these sub-images lack global context and fail to interact with one another. To address this limitation, we propose a novel MLLM, INF-LLaVA, designed for effective high-resolution image perception. INF-LLaVA incorporates two innovative components. First, we introduce a Dual-perspective Cropping Module (DCM), which ensures that each sub-image contains continuous details from a local perspective and comprehensive information from a global perspective. Second, we introduce a Dual-perspective Enhancement Module (DEM) that enables the mutual enhancement of global and local features, allowing INF-LLaVA to process high-resolution images effectively by simultaneously capturing detailed local information and comprehensive global context. Extensive ablation studies validate the effectiveness of these components, and experiments on a diverse set of benchmarks demonstrate that INF-LLaVA outperforms existing MLLMs. Code and pretrained models are available at https://github.com/WeihuangLin/INF-LLaVA.
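
For intuition, the sketch below shows one plausible way to realize the dual-perspective cropping idea described in the abstract: local sub-images as contiguous tiles that preserve fine detail of one region, and global sub-images as strided samplings that each span the full image at reduced density. This is a hypothetical illustration, not the authors' DCM implementation; the function name, grid size, and tensor shapes are assumptions, and the official code is in the linked repository.

```python
import torch

def dual_perspective_crop(image: torch.Tensor, grid: int = 2):
    """Illustrative (unofficial) dual-perspective cropping sketch.

    image: (C, H, W) tensor whose H and W are divisible by `grid`.
    Returns two lists of (C, H//grid, W//grid) sub-images:
      - local:  contiguous tiles, each covering one region in full detail;
      - global: strided samplings, each covering the whole image sparsely.
    """
    c, h, w = image.shape
    th, tw = h // grid, w // grid

    # Local perspective: cut the image into grid x grid contiguous tiles.
    local = [
        image[:, i * th:(i + 1) * th, j * tw:(j + 1) * tw]
        for i in range(grid) for j in range(grid)
    ]

    # Global perspective: take every `grid`-th pixel with different offsets,
    # so every sub-image spans the full field of view at lower resolution.
    global_ = [
        image[:, i::grid, j::grid]
        for i in range(grid) for j in range(grid)
    ]
    return local, global_

if __name__ == "__main__":
    img = torch.randn(3, 672, 672)                    # hypothetical 672x672 input
    local_views, global_views = dual_perspective_crop(img, grid=2)
    print(len(local_views), local_views[0].shape)     # 4 tiles of (3, 336, 336)
    print(len(global_views), global_views[0].shape)   # 4 views of (3, 336, 336)
```

Under this reading, each sub-image has the same resolution regardless of perspective, so both sets can be fed to the same vision encoder; the DEM would then be responsible for fusing the resulting local and global features.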
