UniT: Unified Tactile Representation for Robot Learning

August 12, 2024
Authors: Zhengtong Xu, Raghava Uppuluri, Xinwei Zhang, Cael Fitch, Philip Glen Crandall, Wan Shou, Dongyi Wang, Yu She
cs.AI

Abstract

UniT is a novel approach to tactile representation learning that uses a VQVAE to learn a compact latent space which serves as the tactile representation. It uses tactile images obtained from a single simple object to train a representation that is transferable and generalizable. This tactile representation can be zero-shot transferred to various downstream tasks, including perception tasks and manipulation policy learning. Our benchmarking on an in-hand 3D pose estimation task shows that UniT outperforms existing visual and tactile representation learning methods. Additionally, UniT's effectiveness in policy learning is demonstrated across three real-world tasks involving diverse manipulated objects and complex robot-object-environment interactions. Through extensive experimentation, UniT is shown to be a simple-to-train, plug-and-play, yet widely effective method for tactile representation learning. For more details, please refer to our open-source repository https://github.com/ZhengtongXu/UniT and the project website https://zhengtongxu.github.io/unifiedtactile.github.io/.
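
To make the pipeline described above concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for the real code) of how a VQ-VAE could be pre-trained on tactile images and its encoder then frozen and reused as a tactile representation for a downstream task. All class names, layer sizes, and hyperparameters here (`TactileVQVAE`, `VectorQuantizer`, a 512-entry codebook, 64x64 inputs) are illustrative assumptions.

```python
# Sketch: VQ-VAE pre-training on tactile images, then frozen-encoder reuse.
# Architecture details are assumptions, not the UniT reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):                                      # z: (B, C, H, W)
        B, C, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, C)            # (B*H*W, C)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        z_q = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)
        # Codebook loss + commitment loss, then pass gradients straight through.
        vq_loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()
        return z_q, vq_loss


class TactileVQVAE(nn.Module):
    """Small convolutional VQ-VAE over RGB tactile images."""

    def __init__(self, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, code_dim, 4, stride=2, padding=1),
        )
        self.quantizer = VectorQuantizer(code_dim=code_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z_q, vq_loss = self.quantizer(self.encoder(x))
        recon = self.decoder(z_q)
        return recon, F.mse_loss(recon, x) + vq_loss


if __name__ == "__main__":
    # Pre-train on tactile images (placeholder batch), then freeze the encoder
    # so its latent map can be fed to a downstream head (e.g. pose estimation
    # or a manipulation policy) without further tuning.
    model = TactileVQVAE()
    tactile_batch = torch.randn(8, 3, 64, 64)
    _, loss = model(tactile_batch)
    loss.backward()
    for p in model.encoder.parameters():
        p.requires_grad_(False)
```

The key design point this sketch illustrates is that only reconstruction of tactile images (here, from a single simple object) is needed at pre-training time; downstream tasks consume the frozen encoder's latent features.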
