
SmartPlay: A Benchmark for LLMs as Intelligent Agents

October 2, 2023
Authors: Yue Wu, Xuan Tang, Tom M. Mitchell, Yuanzhi Li
cs.AI

Abstract

Recent large language models (LLMs) have demonstrated great potential toward intelligent agents and next-gen automation, but a systematic benchmark for evaluating LLMs' abilities as agents is still lacking. We introduce SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs as agents. SmartPlay consists of 6 different games, including Rock-Paper-Scissors, Tower of Hanoi, and Minecraft. Each game features a unique setting, providing up to 20 evaluation settings and infinite environment variations. Each game in SmartPlay uniquely challenges a subset of 9 important capabilities of an intelligent LLM agent, including reasoning with object dependencies, planning ahead, spatial reasoning, learning from history, and understanding randomness. The distinction between the sets of capabilities each game tests allows us to analyze each capability separately. SmartPlay serves not only as a rigorous testing ground for evaluating the overall performance of LLM agents but also as a roadmap for identifying gaps in current methodologies. We release our benchmark at github.com/LLMsmartplay/SmartPlay.
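To make the agent-evaluation setup concrete, the sketch below shows what a typical LLM-agent evaluation loop over a game environment might look like, assuming a Gymnasium-style interface. The environment id "RockPaperScissors-v0" and the query_llm helper are hypothetical placeholders for illustration only, not the benchmark's actual API; see the SmartPlay repository for the real interface.

```python
# Minimal sketch of an LLM-agent evaluation loop over a game environment,
# assuming a Gymnasium-style interface. Environment id and query_llm are
# hypothetical placeholders, not SmartPlay's actual API.
import gymnasium as gym


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., via an API client)."""
    raise NotImplementedError


def run_episode(env_id: str = "RockPaperScissors-v0", max_steps: int = 50) -> float:
    env = gym.make(env_id)        # hypothetical environment id
    obs, info = env.reset()
    history = []                  # past (observation, response) pairs, fed back into the prompt
    total_reward = 0.0

    for _ in range(max_steps):
        prompt = (
            "You are an agent playing a game.\n"
            f"History: {history}\n"
            f"Current observation: {obs}\n"
            "Reply with your next action."
        )
        response = query_llm(prompt)
        # Placeholder: a real harness would parse `response` into a valid action.
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        history.append((obs, response))
        total_reward += float(reward)
        if terminated or truncated:
            break

    env.close()
    return total_reward
```

Keeping the interaction as a plain observation-prompt-action loop like this is what lets a single harness cover all of the benchmark's games: only the environment and its observation text change, while the agent code stays the same.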