SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs
August 21, 2024
Authors: Yuanyang Yin, Yaqi Zhao, Yajie Zhang, Ke Lin, Jiahao Wang, Xin Tao, Pengfei Wan, Di Zhang, Baoqun Yin, Wentao Zhang
cs.AI
Abstract
Multimodal Large Language Models (MLLMs) have recently demonstrated
remarkable perceptual and reasoning abilities, typically comprising a Vision
Encoder, an Adapter, and a Large Language Model (LLM). The adapter serves as
the critical bridge between the visual and language components. However,
training adapters with image-level supervision often results in significant
misalignment, undermining the LLMs' capabilities and limiting the potential of
Multimodal LLMs. To address this, we introduce Supervised Embedding Alignment
(SEA), a token-level alignment method that leverages vision-language
pre-trained models, such as CLIP, to align visual tokens with the LLM's
embedding space through contrastive learning. This approach ensures a more
coherent integration of visual and language representations, enhancing the
performance and interpretability of multimodal LLMs while preserving their
inherent capabilities. Extensive experiments show that SEA effectively improves
MLLMs, particularly for smaller models, without adding extra data or inference
computation. SEA also lays the groundwork for developing more general and
adaptable solutions to enhance multimodal systems.
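To make the token-level alignment objective concrete, below is a minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss between adapter-projected visual tokens and target embeddings in the LLM's embedding space. This is not the authors' released implementation: the function name, the assumption that each visual token has a matched target embedding, and the temperature value are all illustrative.

```python
import torch
import torch.nn.functional as F

def token_alignment_loss(visual_tokens: torch.Tensor,
                         target_embeds: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """Token-level contrastive alignment loss (illustrative sketch).

    visual_tokens: (N, D) adapter outputs, one row per visual token,
                   projected into the LLM's embedding space.
    target_embeds: (N, D) supervision targets in the same space
                   (e.g., LLM embeddings of text matched to each token;
                   how these targets are obtained is assumed here).
    Row i of each tensor forms a positive pair; all other pairings in
    the batch serve as negatives.
    """
    v = F.normalize(visual_tokens, dim=-1)
    t = F.normalize(target_embeds, dim=-1)
    logits = v @ t.T / temperature                    # (N, N) cosine similarities
    labels = torch.arange(v.size(0), device=v.device) # positives on the diagonal
    # Symmetric loss over both matching directions, as in CLIP-style training.
    loss_v2t = F.cross_entropy(logits, labels)
    loss_t2v = F.cross_entropy(logits.T, labels)
    return 0.5 * (loss_v2t + loss_t2v)

# Usage with dummy tensors (256 visual tokens in a 4096-dim LLM space):
loss = token_alignment_loss(torch.randn(256, 4096), torch.randn(256, 4096))
```

Because the loss is computed per visual token rather than per image, it provides the finer-grained supervision the abstract contrasts with image-level training, and it adds no cost at inference time since the loss is dropped after training.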