Do Vision and Language Models Share Concepts? A Vector Space Alignment Study
February 13, 2023
Authors: Jiaang Li, Yova Kementchedjhieva, Constanza Fierro, Anders Søgaard
cs.AI
Abstract
Large-scale pretrained language models (LMs) are said to "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy, and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).
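To make the notion of "vector space alignment" in the abstract concrete, here is a minimal sketch of one standard way to test whether two embedding spaces are (approximately) isomorphic: fit an orthogonal map between paired concept embeddings and score it by nearest-neighbor retrieval. The synthetic data, the variable names (`lm`, `vision`), and the orthogonal-Procrustes-plus-retrieval setup are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: test alignment between an LM embedding space and a vision
# embedding space via orthogonal Procrustes + retrieval precision@1.
# All data below is synthetic; in the paper's setting, row i of both
# matrices would be the same concept encoded by each model.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n, d = 1000, 256

# Hypothetical paired concept embeddings: `vision` is a rotated, noisy
# copy of `lm`, so the two spaces are nearly isomorphic by construction.
lm = rng.standard_normal((n, d))
rotation = np.linalg.qr(rng.standard_normal((d, d)))[0]  # random orthogonal matrix
vision = lm @ rotation + 0.1 * rng.standard_normal((n, d))

# Fit the map on one split, evaluate retrieval on a held-out split.
train, test = slice(0, 800), slice(800, n)

# R minimizes ||lm[train] @ R - vision[train]||_F over orthogonal R.
R, _ = orthogonal_procrustes(lm[train], vision[train])

# Precision@1: for each held-out concept, does its mapped LM embedding
# retrieve the matching vision embedding under cosine similarity?
mapped = lm[test] @ R
mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
targets = vision[test] / np.linalg.norm(vision[test], axis=1, keepdims=True)
sims = mapped @ targets.T
p_at_1 = (sims.argmax(axis=1) == np.arange(sims.shape[0])).mean()
print(f"retrieval precision@1: {p_at_1:.2f}")
```

Under this construction, a high precision@1 indicates the two spaces are related by little more than a rotation; with real LM and vision embeddings, the abstract's claim is that such alignment holds only partially and degrades with dispersion, polysemy, and low word frequency.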