Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models
September 30, 2024
Authors: Qi Wu, Zipeng Fu, Xuxin Cheng, Xiaolong Wang, Chelsea Finn
cs.AI
Abstract
Learning-based methods have achieved strong performance for quadrupedal
locomotion. However, several challenges prevent quadrupeds from learning
helpful indoor skills that require interaction with environments and humans:
lack of end-effectors for manipulation, limited semantic understanding using
only simulation data, and low traversability and reachability in indoor
environments. We present a system for quadrupedal mobile manipulation in indoor
environments. It uses a front-mounted gripper for object manipulation, a
low-level controller trained in simulation using egocentric depth for agile
skills like climbing and whole-body tilting, and pre-trained vision-language
models (VLMs) with a third-person fisheye and an egocentric RGB camera for
semantic understanding and command generation. We evaluate our system in two
unseen environments without any real-world data collection or training. Our
system can zero-shot generalize to these environments and complete tasks, like
following a user's command to fetch a randomly placed stuffed toy after climbing
over a queen-sized bed, with a 60% success rate. Project website:
https://helpful-doggybot.github.io/
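
The abstract describes a two-level architecture: a pre-trained VLM consumes the user's instruction plus the fisheye and egocentric RGB views to generate mid-level commands, which a simulation-trained low-level controller executes using egocentric depth. The sketch below illustrates how such a loop could be wired together; every class, function, and signature is a hypothetical illustration under these assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-level design described in the abstract:
# a VLM turns camera images and a user instruction into mid-level commands,
# and a simulation-trained low-level policy turns those commands (plus
# egocentric depth) into joint targets. All names are illustrative.

from dataclasses import dataclass


@dataclass
class MidLevelCommand:
    """Command interface between the VLM and the low-level controller."""
    heading: float  # desired yaw toward the target object (rad)
    speed: float    # forward walking speed (m/s)
    pitch: float    # whole-body tilt, e.g. when climbing onto furniture (rad)
    grasp: bool     # whether to close the front-mounted gripper


def vlm_generate_command(instruction: str, fisheye_rgb, ego_rgb) -> MidLevelCommand:
    """Placeholder for a pre-trained VLM call that grounds the object named
    in `instruction` in the third-person fisheye view and proposes a
    mid-level command. A real system would query a VLM API here."""
    # Assumed default: walk straight toward the target at a modest speed.
    return MidLevelCommand(heading=0.0, speed=0.5, pitch=0.0, grasp=False)


def low_level_act(command: MidLevelCommand, ego_depth) -> list[float]:
    """Placeholder for the simulation-trained low-level policy that maps the
    mid-level command and egocentric depth to joint position targets."""
    return [0.0] * 12  # 12 joint targets for a typical quadruped


def fetch_loop(instruction: str, robot, max_steps: int = 1000) -> bool:
    """Run the fetch behavior until the gripper reports an object or timeout.
    `robot` is an assumed hardware interface exposing camera reads, joint
    control, and a gripper contact check."""
    for _ in range(max_steps):
        fisheye_rgb, ego_rgb, ego_depth = robot.read_cameras()
        cmd = vlm_generate_command(instruction, fisheye_rgb, ego_rgb)
        robot.apply_joint_targets(low_level_act(cmd, ego_depth))
        if cmd.grasp and robot.gripper_has_object():
            return True
    return False
```

The key design choice this sketch highlights is the narrow command interface: the VLM never outputs joint angles, only a compact heading/speed/pitch/grasp tuple, which is what lets the low-level skills be trained entirely in simulation while the semantic layer stays zero-shot.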