SmartPlay : A Benchmark for LLMs as Intelligent Agents
October 2, 2023
Authors: Yue Wu, Xuan Tang, Tom M. Mitchell, Yuanzhi Li
cs.AI
Abstract
Recent large language models (LLMs) have demonstrated great potential toward intelligent agents and next-gen automation, but there is currently no systematic benchmark for evaluating LLMs' abilities as agents. We introduce SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs as agents. SmartPlay consists of 6 different games, including Rock-Paper-Scissors, Tower of Hanoi, and Minecraft. Each game features a unique setting, providing up to 20 evaluation settings and infinite environment variations. Each game in SmartPlay uniquely challenges a subset of 9 important capabilities of an intelligent LLM agent, including reasoning with object dependencies, planning ahead, spatial reasoning, learning from history, and understanding randomness. The distinction between the sets of capabilities each game tests allows us to analyze each capability separately. SmartPlay serves not only as a rigorous testing ground for evaluating the overall performance of LLM agents but also as a road-map for identifying gaps in current methodologies. We release our benchmark at github.com/LLMsmartplay/SmartPlay.
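To make the evaluation setup concrete, below is a minimal, hypothetical sketch of how an LLM agent might be scored on one of the benchmark's games, assuming a gym-style interface (reset/step returning textual observations and a scalar reward). The names make_env and llm_act are illustrative placeholders, not SmartPlay's actual API.

# Hypothetical sketch of an LLM-agent evaluation loop over a gym-style text
# environment; make_env and llm_act are illustrative names, not SmartPlay's API.

def llm_act(history):
    """Query an LLM with the interaction history and return an action string.

    Placeholder: substitute a real model call (e.g., an API client) here.
    """
    raise NotImplementedError

def evaluate(make_env, game_name, episodes=10, max_steps=100):
    """Run several episodes of one game and return the mean episodic reward."""
    total = 0.0
    for _ in range(episodes):
        env = make_env(game_name)          # one game / evaluation setting
        obs = env.reset()                  # textual observation of the initial state
        history, ep_reward = [obs], 0.0
        for _ in range(max_steps):
            action = llm_act(history)      # agent picks from the allowed actions
            obs, reward, done, info = env.step(action)
            history.append(obs)
            ep_reward += reward
            if done:
                break
        total += ep_reward
    return total / episodes

Averaging episodic reward per game keeps the capability analysis separable: since each game stresses a distinct subset of the 9 capabilities, per-game scores can be read as rough indicators of those capabilities rather than a single aggregate number.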