Do Vision-Language Models Have Internal World Models? Towards an Atomic Evaluation
June 27, 2025
作者: Qiyue Gao, Xinyu Pi, Kevin Liu, Junrong Chen, Ruolan Yang, Xinqi Huang, Xinyu Fang, Lu Sun, Gautham Kishore, Bo Ai, Stone Tao, Mengyang Liu, Jiaxi Yang, Chao-Jung Lai, Chuanyang Jin, Jiannan Xiang, Benhao Huang, Zeming Chen, David Danks, Hao Su, Tianmin Shu, Ziqiao Ma, Lianhui Qin, Zhiting Hu
cs.AI
Abstract
Internal world models (WMs) enable agents to understand the world's state and
predict transitions, serving as the basis for advanced deliberative reasoning.
Recent large Vision-Language Models (VLMs), such as OpenAI o3, GPT-4o and
Gemini, exhibit potential as general-purpose WMs. While the latest studies have
evaluated and shown limitations in specific capabilities such as visual
understanding, a systematic evaluation of VLMs' fundamental WM abilities
remains absent. Drawing on comparative psychology and cognitive science, we
propose a two-stage framework that assesses Perception (visual, spatial,
temporal, quantitative, and motion) and Prediction (mechanistic simulation,
transitive inference, compositional inference) to provide an atomic evaluation
of VLMs as WMs. Guided by this framework, we introduce WM-ABench, a large-scale
benchmark comprising 23 fine-grained evaluation dimensions across 6 diverse
simulated environments with controlled counterfactual simulations. Through 660
experiments on 15 of the latest commercial and open-source VLMs, we find that these
models exhibit striking limitations in basic world modeling abilities. For
instance, almost all models perform at near-random accuracy when distinguishing
motion trajectories. Additionally, they lack disentangled understanding --
e.g., some models tend to believe blue objects move faster than green ones.
Richer results and analyses reveal significant gaps between VLMs and
human-level world modeling.
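
To make the idea of an atomic, per-dimension evaluation concrete, the sketch below shows one plausible way to organize multiple-choice items with counterfactual distractors and to report per-dimension accuracy against the random-guess baseline. The `EvalItem` schema, field names, environment label, and the `evaluate` helper are illustrative assumptions and not the actual WM-ABench data format or code; only the dimension names are taken from the abstract.

```python
# Minimal sketch of a per-dimension (atomic) VLM world-model evaluation.
# All identifiers below are hypothetical; they do not reflect WM-ABench's real schema.
from dataclasses import dataclass
from collections import defaultdict
import random

# Perception and prediction dimensions named in the abstract.
PERCEPTION = ["visual", "spatial", "temporal", "quantitative", "motion"]
PREDICTION = ["mechanistic_simulation", "transitive_inference", "compositional_inference"]

@dataclass
class EvalItem:
    dimension: str        # one atomic ability, e.g. "motion"
    environment: str      # one of the simulated environments
    question: str         # query over rendered frames / trajectories
    choices: list[str]    # candidate answers, incl. counterfactual distractors
    answer_idx: int       # index of the ground-truth choice

def evaluate(model_answer_fn, items):
    """Per-dimension accuracy, compared against the random-guess baseline."""
    correct, total, chance = defaultdict(int), defaultdict(int), defaultdict(float)
    for item in items:
        pred = model_answer_fn(item)              # model returns a choice index
        correct[item.dimension] += int(pred == item.answer_idx)
        total[item.dimension] += 1
        chance[item.dimension] += 1.0 / len(item.choices)
    return {
        dim: {"accuracy": correct[dim] / total[dim],
              "random_baseline": chance[dim] / total[dim]}
        for dim in total
    }

if __name__ == "__main__":
    # Toy usage with a random "model", illustrating the near-chance pattern
    # the paper reports for motion-trajectory discrimination.
    items = [EvalItem("motion", "toy_env", f"Which trajectory matches clip {i}?",
                      ["A", "B", "C", "D"], random.randrange(4)) for i in range(100)]
    print(evaluate(lambda item: random.randrange(len(item.choices)), items))
```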