
RVT-2: Learning Precise Manipulation from Few Demonstrations

June 12, 2024
作者: Ankit Goyal, Valts Blukis, Jie Xu, Yijie Guo, Yu-Wei Chao, Dieter Fox
cs.AI

Abstract

In this work, we study how to build a robotic system that can solve multiple 3D manipulation tasks given language instructions. To be useful in industrial and household domains, such a system should be capable of learning new tasks with few demonstrations and solving them precisely. Prior works, like PerAct and RVT, have studied this problem; however, they often struggle with tasks requiring high precision. We study how to make them more effective, precise, and fast. Using a combination of architectural and system-level improvements, we propose RVT-2, a multitask 3D manipulation model that is 6X faster in training and 2X faster in inference than its predecessor RVT. RVT-2 achieves a new state-of-the-art on RLBench, improving the success rate from 65% to 82%. RVT-2 is also effective in the real world, where it can learn tasks requiring high precision, like picking up and inserting plugs, with just 10 demonstrations. Visual results, code, and trained model are provided at: https://robotic-view-transformer-2.github.io/.
