VideoGameBench: Can Vision-Language Models complete popular video games?
May 23, 2025
Authors: Alex L. Zhang, Thomas L. Griffiths, Karthik R. Narasimhan, Ofir Press
cs.AI
Abstract
Vision-language models (VLMs) have achieved strong results on coding and math
benchmarks that are challenging for humans, yet their ability to perform tasks
that come naturally to humans--such as perception, spatial navigation, and
memory management--remains understudied. Real video games are crafted to be
intuitive for humans to learn and master by leveraging innate inductive biases,
making them an ideal testbed for evaluating such capabilities in VLMs. To this
end, we introduce VideoGameBench, a benchmark consisting of 10 popular video
games from the 1990s that VLMs directly interact with in real-time.
VideoGameBench challenges models to complete entire games with access to only
raw visual inputs and a high-level description of objectives and controls, a
significant departure from existing setups that rely on game-specific
scaffolding and auxiliary information. We keep three of the games secret to
encourage solutions that generalize to unseen environments. Our experiments
show that frontier vision-language models struggle to progress beyond the
beginning of each game. We find inference latency to be a major limitation of
frontier models in the real-time setting; therefore, we introduce
VideoGameBench Lite, a setting where the game pauses while waiting for the LM's
next action. The best performing model, Gemini 2.5 Pro, completes only 0.48% of
VideoGameBench and 1.6% of VideoGameBench Lite. We hope that the formalization
of the human skills mentioned above into this benchmark motivates progress in
these research directions.Summary
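As a rough illustration of the distinction the abstract draws between the real-time setting and VideoGameBench Lite, the interaction can be sketched as an agent-environment loop. Everything below (`MockEmulator`, `realtime_loop`, `lite_loop`) is a hypothetical stand-in for exposition, not the actual VideoGameBench API:

```python
import time

class MockEmulator:
    """Hypothetical stand-in for a game emulator (not the real API)."""
    def __init__(self):
        self.frames_elapsed = 0

    def step(self, action=None):
        # Advance the game world by one frame, optionally with an input.
        self.frames_elapsed += 1

    def screenshot(self):
        # Raw visual observation the VLM would receive.
        return f"frame-{self.frames_elapsed}"

def realtime_loop(emulator, model, fps=30, steps=10):
    """Real-time setting: the game keeps advancing while the model
    thinks, so inference latency translates into skipped frames."""
    for _ in range(steps):
        frame = emulator.screenshot()
        t0 = time.monotonic()
        action = model(frame)              # inference latency is paid here
        latency = time.monotonic() - t0
        for _ in range(int(latency * fps)):
            emulator.step()                # frames lost during inference
        emulator.step(action)

def lite_loop(emulator, model, steps=10):
    """VideoGameBench Lite: the game pauses while waiting for the
    model's next action, removing latency pressure."""
    for _ in range(steps):
        frame = emulator.screenshot()      # world is frozen here
        action = model(frame)              # model can think indefinitely
        emulator.step(action)              # world advances only now
```

The sketch makes the paper's latency point concrete: in `realtime_loop` a slow model forfeits frames it never observed, while in `lite_loop` the world only advances once an action arrives.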