AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents
January 23, 2024
Authors: Michael Ahn, Debidatta Dwibedi, Chelsea Finn, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Karol Hausman, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Sean Kirmani, Isabel Leal, Edward Lee, Sergey Levine, Yao Lu, Sharath Maddineni, Kanishka Rao, Dorsa Sadigh, Pannag Sanketi, Pierre Sermanet, Quan Vuong, Stefan Welker, Fei Xia, Ted Xiao, Peng Xu, Steve Xu, Zhuo Xu
cs.AI
Abstract
Foundation models that incorporate language, vision, and more recently
actions have revolutionized the ability to harness internet-scale data to
reason about useful tasks. However, one of the key challenges of training
embodied foundation models is the lack of data grounded in the physical world.
In this paper, we propose AutoRT, a system that leverages existing foundation
models to scale up the deployment of operational robots in completely unseen
scenarios with minimal human supervision. AutoRT leverages vision-language
models (VLMs) for scene understanding and grounding, and further uses large
language models (LLMs) for proposing diverse and novel instructions to be
performed by a fleet of robots. Guiding data collection by tapping into the
knowledge of foundation models enables AutoRT to effectively reason about
autonomy tradeoffs and safety while significantly scaling up data collection
for robot learning. We demonstrate AutoRT proposing instructions to over 20
robots across multiple buildings and collecting 77k real robot episodes via
both teleoperation and autonomous robot policies. We experimentally show that
such "in-the-wild" data collected by AutoRT is significantly more diverse, and
that AutoRT's use of LLMs allows for instruction following data collection
robots that can align to human preferences.
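
To make the pipeline described in the abstract concrete, below is a minimal, self-contained Python sketch of one orchestration round: a VLM describes each robot's scene, an LLM proposes and then filters candidate instructions, and each surviving instruction is executed and logged as an episode. All class and function names (SceneVLM, TaskLLM, Robot, collect_episode) and the toy string outputs are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of an AutoRT-style orchestration round.
# All names and placeholder outputs below are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Episode:
    robot_id: str
    instruction: str
    source: str  # e.g. "teleop" or "autonomous policy"


class Robot:
    def __init__(self, robot_id: str):
        self.robot_id = robot_id

    def get_camera_image(self) -> str:
        # Stand-in for a real camera frame.
        return f"image_from_{self.robot_id}"


class SceneVLM:
    """Placeholder for the vision-language model used for scene understanding."""

    def describe(self, image: str) -> str:
        return "a countertop with a sponge, a cup, and a banana"


class TaskLLM:
    """Placeholder for the LLM that proposes and then filters instructions."""

    def propose_tasks(self, scene: str) -> List[str]:
        # Prompted with the scene description, the LLM generates diverse,
        # novel manipulation instructions grounded in the visible objects.
        return [
            "pick up the sponge and wipe the counter",
            "move the cup next to the banana",
            "place the banana in a pot of boiling water",  # should be rejected
        ]

    def filter_tasks(self, tasks: List[str]) -> List[str]:
        # Stand-in for the safety/feasibility critique that rejects tasks the
        # robot should not attempt; a real system would query the LLM again.
        return [t for t in tasks if "boiling" not in t]


def collect_episode(robot: Robot, instruction: str) -> Episode:
    # Stand-in for executing the instruction via teleoperation or an
    # autonomous policy and logging the resulting trajectory.
    return Episode(robot.robot_id, instruction, source="autonomous policy")


def autort_round(robots: List[Robot], vlm: SceneVLM, llm: TaskLLM) -> List[Episode]:
    """One round of orchestration across the fleet."""
    dataset: List[Episode] = []
    for robot in robots:
        scene = vlm.describe(robot.get_camera_image())       # scene understanding
        tasks = llm.filter_tasks(llm.propose_tasks(scene))   # propose, then triage
        dataset.extend(collect_episode(robot, t) for t in tasks)
    return dataset


if __name__ == "__main__":
    episodes = autort_round([Robot("robot_01"), Robot("robot_02")], SceneVLM(), TaskLLM())
    for ep in episodes:
        print(ep)
```

In this reading of the abstract, the filtering step is where the autonomy and safety tradeoffs would be enforced, and collect_episode is where the system would dispatch to either a human teleoperator or an autonomous policy; the details of both are left abstract here.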