

AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents

January 23, 2024
Authors: Michael Ahn, Debidatta Dwibedi, Chelsea Finn, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Karol Hausman, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Sean Kirmani, Isabel Leal, Edward Lee, Sergey Levine, Yao Lu, Sharath Maddineni, Kanishka Rao, Dorsa Sadigh, Pannag Sanketi, Pierre Sermanet, Quan Vuong, Stefan Welker, Fei Xia, Ted Xiao, Peng Xu, Steve Xu, Zhuo Xu
cs.AI

Abstract

Foundation models that incorporate language, vision, and more recently actions have revolutionized the ability to harness internet scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is the lack of data grounded in the physical world. In this paper, we propose AutoRT, a system that leverages existing foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. AutoRT leverages vision-language models (VLMs) for scene understanding and grounding, and further uses large language models (LLMs) for proposing diverse and novel instructions to be performed by a fleet of robots. Guiding data collection by tapping into the knowledge of foundation models enables AutoRT to effectively reason about autonomy tradeoffs and safety while significantly scaling up data collection for robot learning. We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies. We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction following data collection robots that can align to human preferences.
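To make the described pipeline concrete, the sketch below illustrates one possible shape of an AutoRT-style orchestration loop: a VLM describes each robot's scene, an LLM proposes candidate instructions, a filtering step decides between autonomous execution and teleoperation, and every episode is logged. All names here (Robot, vlm_describe_scene, llm_propose_tasks, llm_filter_tasks, run_episode) are hypothetical stand-ins assumed for illustration; this is not the authors' implementation or API.

```python
# Hypothetical sketch of an AutoRT-style orchestration loop.
# All interfaces below are placeholders, not the paper's actual code.

from dataclasses import dataclass
import random


@dataclass
class Robot:
    """A single robot in the fleet (placeholder)."""
    robot_id: int

    def capture_image(self) -> str:
        # A real system would return a camera frame; here it is a stub.
        return f"image_from_robot_{self.robot_id}"


def vlm_describe_scene(image: str) -> list[str]:
    """Stand-in for a vision-language model listing objects in view (grounding)."""
    return ["sponge", "cup", "countertop"]


def llm_propose_tasks(objects: list[str], n: int = 5) -> list[str]:
    """Stand-in for an LLM prompted to propose diverse manipulation instructions."""
    return [f"pick up the {obj}" for obj in objects][:n]


def llm_filter_tasks(tasks: list[str]) -> list[tuple[str, str]]:
    """Stand-in for the safety/autonomy reasoning step: keep a task and decide
    whether it runs autonomously or is routed to a human teleoperator."""
    routed = []
    for task in tasks:
        mode = "autonomous" if "pick up" in task else "teleop"
        routed.append((task, mode))
    return routed


def run_episode(robot: Robot, task: str, mode: str) -> dict:
    """Execute (or hand off) one episode and return a logged record."""
    return {"robot": robot.robot_id, "task": task, "mode": mode,
            "success": random.random() > 0.5}


def orchestrate(fleet: list[Robot], episodes_per_robot: int = 2) -> list[dict]:
    """Collect episodes across the fleet by chaining scene description,
    instruction proposal, filtering, and execution."""
    logs = []
    for robot in fleet:
        for _ in range(episodes_per_robot):
            image = robot.capture_image()
            objects = vlm_describe_scene(image)      # scene understanding
            proposals = llm_propose_tasks(objects)   # instruction proposal
            for task, mode in llm_filter_tasks(proposals):
                logs.append(run_episode(robot, task, mode))
    return logs


if __name__ == "__main__":
    fleet = [Robot(robot_id=i) for i in range(3)]
    for record in orchestrate(fleet):
        print(record)
```

The routing decision in llm_filter_tasks mirrors, in simplified form, the autonomy-versus-teleoperation tradeoff the abstract describes: tasks the system judges safe and feasible run autonomously, while the rest are handed to human operators, and both paths contribute logged episodes to the dataset.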