
Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots

February 15, 2024
作者: Cheng Chi, Zhenjia Xu, Chuer Pan, Eric Cousineau, Benjamin Burchfiel, Siyuan Feng, Russ Tedrake, Shuran Song
cs.AI

Abstract
We present Universal Manipulation Interface (UMI) -- a data collection and policy learning framework that allows direct skill transfer from in-the-wild human demonstrations to deployable robot policies. UMI employs hand-held grippers coupled with careful interface design to enable portable, low-cost, and information-rich data collection for challenging bimanual and dynamic manipulation demonstrations. To facilitate deployable policy learning, UMI incorporates a carefully designed policy interface with inference-time latency matching and a relative-trajectory action representation. The resulting learned policies are hardware-agnostic and deployable across multiple robot platforms. Equipped with these features, the UMI framework unlocks new robot manipulation capabilities, allowing zero-shot generalizable dynamic, bimanual, precise, and long-horizon behaviors, by only changing the training data for each task. We demonstrate UMI's versatility and efficacy with comprehensive real-world experiments, where policies learned via UMI zero-shot generalize to novel environments and objects when trained on diverse human demonstrations. UMI's hardware and software system is open-sourced at https://umi-gripper.github.io.
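The "relative-trajectory action representation" mentioned above expresses predicted waypoints in the frame of the robot's current end-effector pose rather than in a fixed world frame, which is part of what makes the policies hardware-agnostic. The paper operates on full SE(3) poses; as a minimal illustrative sketch only, the idea can be shown with hypothetical 2-D (x, y, heading) poses:

```python
import math

def relative_trajectory(current_pose, future_poses):
    """Express future waypoints relative to the current end-effector pose.

    Poses are (x, y, theta) tuples in the world frame. This 2-D version is a
    hypothetical stand-in for the SE(3) representation used in the paper:
    each waypoint's displacement is rotated into the current pose's frame,
    and its heading is given as an offset from the current heading.
    """
    cx, cy, ct = current_pose
    rel = []
    for x, y, t in future_poses:
        dx, dy = x - cx, y - cy
        # Rotate the world-frame displacement by -theta into the local frame.
        rx = math.cos(-ct) * dx - math.sin(-ct) * dy
        ry = math.sin(-ct) * dx + math.cos(-ct) * dy
        rel.append((rx, ry, t - ct))
    return rel
```

Because the output depends only on motion relative to the gripper, the same action sequence remains valid regardless of where the robot base sits in the world, which is one reason such representations transfer across platforms.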
