Scaling Instructable Agents Across Many Simulated Worlds
March 13, 2024
Authors: SIMA Team, Maria Abi Raad, Arun Ahuja, Catarina Barros, Frederic Besse, Andrew Bolt, Adrian Bolton, Bethanie Brownfield, Gavin Buttimore, Max Cant, Sarah Chakera, Stephanie C. Y. Chan, Jeff Clune, Adrian Collister, Vikki Copeman, Alex Cullum, Ishita Dasgupta, Dario de Cesare, Julia Di Trapani, Yani Donchev, Emma Dunleavy, Martin Engelcke, Ryan Faulkner, Frankie Garcia, Charles Gbadamosi, Zhitao Gong, Lucy Gonzales, Karol Gregor, Arne Olav Hallingstad, Tim Harley, Sam Haves, Felix Hill, Ed Hirst, Drew A. Hudson, Steph Hughes-Fitt, Danilo J. Rezende, Mimi Jasarevic, Laura Kampis, Rosemary Ke, Thomas Keck, Junkyung Kim, Oscar Knagg, Kavya Kopparapu, Andrew Lampinen, Shane Legg, Alexander Lerchner, Marjorie Limont, Yulan Liu, Maria Loks-Thompson, Joseph Marino, Kathryn Martin Cussons, Loic Matthey, Siobhan Mcloughlin, Piermaria Mendolicchio, Hamza Merzic, Anna Mitenkova, Alexandre Moufarek, Valeria Oliveira, Yanko Oliveira, Hannah Openshaw, Renke Pan, Aneesh Pappu, Alex Platonov, Ollie Purkiss, David Reichert, John Reid, Pierre Harvey Richemond, Tyson Roberts, Giles Ruscoe, Jaume Sanchez Elias, Tasha Sandars, Daniel P. Sawyer, Tim Scholtes, Guy Simmons, Daniel Slater, Hubert Soyer, Heiko Strathmann, Peter Stys, Allison C. Tam, Denis Teplyashin, Tayfun Terzi, Davide Vercelli, Bojan Vujatovic, Marcus Wainwright, Jane X. Wang, Zhengdong Wang, Daan Wierstra, Duncan Williams, Nathaniel Wong, Sarah York, Nick Young
cs.AI
Abstract
Building embodied AI systems that can follow arbitrary language instructions
in any 3D environment is a key challenge for creating general AI. Accomplishing
this goal requires learning to ground language in perception and embodied
actions, in order to accomplish complex tasks. The Scalable, Instructable,
Multiworld Agent (SIMA) project tackles this by training agents to follow
free-form instructions across a diverse range of virtual 3D environments,
including curated research environments as well as open-ended, commercial video
games. Our goal is to develop an instructable agent that can accomplish
anything a human can do in any simulated 3D environment. Our approach focuses
on language-driven generality while imposing minimal assumptions. Our agents
interact with environments in real-time using a generic, human-like interface:
the inputs are image observations and language instructions and the outputs are
keyboard-and-mouse actions. This general approach is challenging, but it allows
agents to ground language across many visually complex and semantically rich
environments while also allowing us to readily run agents in new environments.
In this paper we describe our motivation and goal, the initial progress we have
made, and promising preliminary results on several diverse research
environments and a variety of commercial video games.
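The generic, human-like interface the abstract describes (image observations and language instructions in, keyboard-and-mouse actions out, in real time) can be sketched as a minimal type signature. This is purely illustrative: the paper does not publish an API, and all names here (`Observation`, `Action`, `InstructableAgent`, their fields) are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence


@dataclass
class Observation:
    """One timestep as the agent sees it (hypothetical types)."""
    image: bytes      # raw RGB frame rendered by the environment
    instruction: str  # free-form language instruction, e.g. "chop the tree"


@dataclass
class Action:
    """Keyboard-and-mouse output, mirroring a human player's controls."""
    keys_pressed: Sequence[str]   # e.g. ("w", "shift")
    mouse_dx: float               # horizontal mouse movement this step
    mouse_dy: float               # vertical mouse movement this step
    mouse_buttons: Sequence[str]  # e.g. ("left",)


class InstructableAgent(Protocol):
    """Anything that maps one observation to one action in real time."""
    def act(self, obs: Observation) -> Action: ...
```

Because the interface assumes nothing environment-specific, the same agent signature applies unchanged to curated research environments and commercial games alike, which is what lets the project "readily run agents in new environments."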