VS-Bench: Evaluating VLMs for Strategic Reasoning and Decision-Making in Multi-Agent Environments
June 3, 2025
Authors: Zelai Xu, Zhexuan Xu, Xiangmin Yi, Huining Yuan, Xinlei Chen, Yi Wu, Chao Yu, Yu Wang
cs.AI
Abstract
Recent advancements in Vision Language Models (VLMs) have expanded their capabilities to interactive agent tasks, yet existing benchmarks remain limited to single-agent or text-only environments. In contrast, real-world scenarios often involve multiple agents interacting within rich visual and linguistic contexts, posing challenges with both multimodal observations and strategic interactions. To bridge this gap, we introduce Visual Strategic Bench (VS-Bench), a multimodal benchmark that evaluates VLMs for strategic reasoning and decision-making in multi-agent environments. VS-Bench comprises eight vision-grounded environments spanning cooperative, competitive, and mixed-motive interactions, designed to assess agents' ability to predict others' future moves and optimize for long-term objectives. We consider two complementary evaluation dimensions: offline evaluation of strategic reasoning by next-action prediction accuracy, and online evaluation of decision-making by normalized episode return. Extensive experiments on fourteen leading VLMs reveal a significant gap between current models and optimal performance, with the best model attaining 47.8% prediction accuracy and 24.3% normalized return. We further conduct in-depth analyses of multimodal observations, test-time scaling, social behaviors, and failure cases of VLM agents. By standardizing the evaluation and highlighting the limitations of existing models, we envision VS-Bench as a foundation for future research on strategic multimodal agents. Code and data are available at https://vs-bench.github.io.
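The abstract names two evaluation metrics: next-action prediction accuracy (offline) and normalized episode return (online). Below is a minimal sketch of how such metrics could be computed; the record structures and field names (predicted_action, episode_return, min_return, max_return) are hypothetical illustrations, not the benchmark's actual API, and the per-environment min-max rescaling is one plausible reading of "normalized episode return" rather than the paper's exact definition.

```python
# Hypothetical sketch of VS-Bench-style metrics, not the official implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class StepRecord:
    predicted_action: int  # the agent's prediction of another agent's next action
    true_action: int       # the action that agent actually took


@dataclass
class EpisodeRecord:
    steps: List[StepRecord]
    episode_return: float  # raw return obtained by the VLM agent in this episode
    min_return: float      # assumed lower bound for this environment (e.g., worst-case return)
    max_return: float      # assumed upper bound for this environment (e.g., optimal return)


def prediction_accuracy(episodes: List[EpisodeRecord]) -> float:
    """Offline evaluation: fraction of correctly predicted next actions across all steps."""
    steps = [s for ep in episodes for s in ep.steps]
    correct = sum(s.predicted_action == s.true_action for s in steps)
    return correct / len(steps) if steps else 0.0


def normalized_return(episodes: List[EpisodeRecord]) -> float:
    """Online evaluation: episode return rescaled to [0, 1] per environment, then averaged."""
    scores = [
        (ep.episode_return - ep.min_return) / (ep.max_return - ep.min_return)
        for ep in episodes
    ]
    return sum(scores) / len(scores) if scores else 0.0
```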