Lost in Embeddings: Information Loss in Vision-Language Models
September 15, 2025
Authors: Wenyan Li, Raphael Tang, Chengzu Li, Caiqi Zhang, Ivan Vulić, Anders Søgaard
cs.AI
Abstract
Vision-language models (VLMs) often process visual inputs through a pretrained vision encoder, followed by a projection into the language model's embedding space via a connector component. While crucial for modality fusion, the potential information loss induced by this projection step and its direct impact on model capabilities remain understudied. We introduce two complementary approaches to examine and quantify this loss by analyzing the latent representation space. First, we evaluate semantic information preservation by analyzing changes in k-nearest neighbor relationships between image representations before and after projection. Second, we directly measure information loss by reconstructing visual embeddings from the projected representations, localizing the loss at the image-patch level. Experiments reveal that connectors substantially distort the local geometry of visual representations, with k-nearest neighbors diverging by 40-60% post-projection, correlating with degradation in retrieval performance. The patch-level embedding reconstruction provides interpretable insights into model behavior on visually grounded question-answering tasks, finding that areas of high information loss reliably predict instances where models struggle.