ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning
October 23, 2024
Authors: Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Yonggang Wen
cs.AI
Abstract
Recent advancements in multimodal fusion have witnessed the remarkable
success of vision-language (VL) models, which excel in various multimodal
applications such as image captioning and visual question answering. However,
building VL models requires substantial hardware resources, where efficiency is
restricted by two key factors: the language model's input sequence, extended
with vision features, demands more computational operations, and the large
number of additional learnable parameters increases memory complexity. These
challenges significantly restrict the broader applicability of such models. To
bridge this gap, we propose ADEM-VL, an efficient vision-language method that
tunes VL models based on pretrained large language models (LLMs) by adopting a
parameter-free cross-attention mechanism for similarity measurements in
multimodal fusion. This approach only requires embedding vision features into
the language space, significantly reducing the number of trainable parameters
and accelerating both training and inference speeds. To enhance representation
learning in the fusion module, we introduce an efficient multiscale feature
generation scheme that requires only a single forward pass through the vision
encoder. Moreover, we propose an adaptive fusion scheme that dynamically
discards less relevant visual information for each text token based on its
attention score. This ensures that the fusion process prioritizes the most
pertinent visual features. With experiments on various tasks including visual
question answering, image captioning, and instruction-following, we demonstrate
that our framework outperforms existing approaches. Specifically, our method
surpasses existing methods in average accuracy by 0.77% on the ScienceQA
dataset, while reducing both training and inference latency, demonstrating the
superiority of our framework. The code is available at
https://github.com/Hao840/ADEM-VL.
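
To make the fusion recipe in the abstract concrete, below is a minimal PyTorch sketch of the ideas it describes: embedding vision features into the language space, a cross-attention whose similarity computation adds no learnable parameters, multiscale visual tokens obtained from a single vision-encoder pass, and adaptive dropping of low-attention visual tokens per text token. All module names, tensor shapes, pooling factors, and the keep ratio are illustrative assumptions rather than the authors' implementation; refer to the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F
from torch import nn


class ParameterFreeFusion(nn.Module):
    """Illustrative fusion module: projects vision features into the language
    space, builds multiscale visual tokens from one encoder pass, and fuses
    them into text hidden states via a cross-attention whose similarity step
    has no learnable query/key/value projections."""

    def __init__(self, vision_dim: int, lang_dim: int, keep_ratio: float = 0.5):
        super().__init__()
        # The only trainable component: embedding vision features into the LLM space.
        self.proj = nn.Linear(vision_dim, lang_dim)
        self.keep_ratio = keep_ratio  # fraction of visual tokens kept per text token (assumed value)

    def multiscale(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) patch features from a single vision-encoder forward pass.
        B, N, C = feats.shape
        side = int(N ** 0.5)  # assumes a square patch grid
        grid = feats.transpose(1, 2).reshape(B, C, side, side)
        scales = [feats]
        for s in (2, 4):  # assumed pooling factors for the coarser scales
            pooled = F.avg_pool2d(grid, kernel_size=s)
            scales.append(pooled.flatten(2).transpose(1, 2))
        return torch.cat(scales, dim=1)  # (B, N', C) multiscale visual tokens

    def forward(self, text_h: torch.Tensor, vis_feats: torch.Tensor) -> torch.Tensor:
        # text_h: (B, T, D) hidden states of the language model.
        vis = self.proj(self.multiscale(vis_feats))  # (B, N', D)
        # Parameter-free cross-attention: similarity is a plain scaled dot product
        # between text hidden states and the projected visual tokens.
        attn = torch.softmax(
            text_h @ vis.transpose(1, 2) / text_h.size(-1) ** 0.5, dim=-1
        )  # (B, T, N')
        # Adaptive fusion: for each text token, drop visual tokens whose attention
        # score falls below that token's top-k threshold, then renormalize.
        k = max(1, int(attn.size(-1) * self.keep_ratio))
        thresh = attn.topk(k, dim=-1).values[..., -1:]
        attn = attn * (attn >= thresh).to(attn.dtype)
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-6)
        return text_h + attn @ vis  # residual fusion into the text stream


if __name__ == "__main__":
    fusion = ParameterFreeFusion(vision_dim=1024, lang_dim=4096)
    text_h = torch.randn(2, 16, 4096)      # (batch, text tokens, LLM width)
    vis_feats = torch.randn(2, 256, 1024)  # (batch, 16x16 patches, ViT width)
    print(fusion(text_h, vis_feats).shape)  # torch.Size([2, 16, 4096])
```

The sketch mirrors the abstract's claims under these assumptions: the only trainable component is the vision-to-language projection, multiscale tokens come from pooling a single encoder output rather than repeated forward passes, and per-token dropping of low-scoring visual tokens keeps the fusion focused on the most relevant features.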