
Do Vision and Language Models Share Concepts? A Vector Space Alignment Study

February 13, 2023
作者: Jiaang Li, Yova Kementchedjhieva, Constanza Fierro, Anders Søgaard
cs.AI

Abstract

Large-scale pretrained language models (LMs) are said to "lack the ability to connect utterances to the world" (Bender and Koller, 2020), because they do not have "mental models of the world" (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy, and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).
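The "vector space alignment" idea in the title can be illustrated with a minimal sketch (this is an assumption for illustration, not the paper's actual pipeline): learn an orthogonal Procrustes map from one embedding space to the other, then score how often each mapped LM vector retrieves its matching vision vector as the nearest neighbour. The synthetic data, dimensions, and function names below are all hypothetical.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: rotation W minimizing ||X @ W - Y||_F."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def precision_at_1(X, Y):
    """Fraction of rows of X whose cosine nearest neighbour in Y is the matching row."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = Xn @ Yn.T
    return float(np.mean(sims.argmax(axis=1) == np.arange(len(X))))

rng = np.random.default_rng(0)
lm_emb = rng.normal(size=(100, 64))          # stand-in for LM concept embeddings
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
vision_emb = lm_emb @ Q                      # a perfectly isomorphic "vision" space

W = procrustes_align(lm_emb, vision_emb)
print(precision_at_1(lm_emb @ W, vision_emb))  # → 1.0 when spaces are isomorphic
```

With real LM and vision embeddings the spaces are only partially isomorphic, so retrieval precision falls below 1.0; the abstract's finding is that it sits well above chance and varies with dispersion, polysemy, and frequency.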


November 28, 2024