AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding
February 3, 2025
作者: Ahmed Masry, Juan A. Rodriguez, Tianyu Zhang, Suyuchen Wang, Chao Wang, Aarash Feizi, Akshay Kalkunte Suresh, Abhay Puri, Xiangru Jian, Pierre-André Noël, Sathwik Tejaswi Madhusudhan, Marco Pedersoli, Bang Liu, Nicolas Chapados, Yoshua Bengio, Enamul Hoque, Christopher Pal, Issam H. Laradji, David Vazquez, Perouz Taslakian, Spandana Gella, Sai Rajeswar
cs.AI
Abstract
Aligning visual features with language embeddings is a key challenge in
vision-language models (VLMs). The performance of such models hinges on having
a good connector that maps visual features generated by a vision encoder to a
shared embedding space with the LLM while preserving semantic similarity.
Existing connectors, such as multilayer perceptrons (MLPs), often produce
out-of-distribution or noisy inputs, leading to misalignment between the
modalities. In this work, we propose a novel vision-text alignment method,
AlignVLM, that maps visual features to a weighted average of LLM text
embeddings. Our approach leverages the linguistic priors encoded by the LLM to
ensure that visual features are mapped to regions of the space that the LLM can
effectively interpret. AlignVLM is particularly effective for document
understanding tasks, where scanned document images must be accurately mapped to
their textual content. Our extensive experiments show that AlignVLM achieves
state-of-the-art performance compared to prior alignment methods. We provide
further analysis demonstrating improved vision-text feature alignment and
robustness to noise.
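For intuition, here is a minimal PyTorch sketch of the connector idea the abstract describes: visual features are converted to softmax weights over the LLM vocabulary, and the output is the corresponding weighted average of the LLM's text embeddings. The class name, dimensions, and the choice of a single linear layer over the full vocabulary embedding matrix are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AlignConnector(nn.Module):
    """Illustrative connector: each visual feature is mapped to a convex
    combination (softmax-weighted average) of the LLM's text embeddings,
    keeping the output inside the LLM's embedding distribution."""

    def __init__(self, vision_dim: int, llm_embedding: torch.Tensor):
        super().__init__()
        vocab_size, _ = llm_embedding.shape
        # Project vision features to logits over the LLM vocabulary.
        self.to_vocab_logits = nn.Linear(vision_dim, vocab_size)
        # Frozen copy of the LLM input embedding matrix (vocab_size, llm_dim).
        self.register_buffer("llm_embedding", llm_embedding)

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # visual_features: (batch, num_patches, vision_dim)
        weights = torch.softmax(self.to_vocab_logits(visual_features), dim=-1)
        # Weighted average of text embeddings -> (batch, num_patches, llm_dim)
        return weights @ self.llm_embedding


# Toy usage with assumed dimensions (not taken from the paper):
vocab_size, llm_dim, vision_dim = 32000, 4096, 1024
connector = AlignConnector(vision_dim, torch.randn(vocab_size, llm_dim))
aligned = connector(torch.randn(2, 196, vision_dim))  # shape: (2, 196, 4096)
```

Because the output is a weighted average of existing text embeddings, it cannot drift outside the region of the embedding space the LLM was trained on, which is the property the abstract contrasts with plain MLP connectors.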