Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers

September 30, 2024
作者: Lirui Wang, Xinlei Chen, Jialiang Zhao, Kaiming He
cs.AI

Abstract

One of the roadblocks to training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train one specific embodiment on one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task- and embodiment-agnostic shared representation. This general architecture aligns the specific proprioception and vision inputs from distinct embodiments to a short sequence of tokens, then processes those tokens to map to robot control for different tasks. Leveraging recent large-scale multi-embodiment real-world robotic datasets as well as simulation, deployed-robot, and human video datasets, we investigate pre-training policies across heterogeneity. We conduct experiments to study the scaling behavior of the training objectives across 52 datasets. HPT outperforms several baselines and improves fine-tuned policy performance by over 20% on unseen tasks in multiple simulator benchmarks and real-world settings. See the project website (https://liruiw.github.io/hpt/) for code and videos.
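The stem–trunk–head factorization the abstract describes can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the class names, dimensions, and example embodiments are hypothetical, and the shared trunk is reduced to a single token-wise linear layer as a stand-in for HPT's Transformer. The key idea it demonstrates is that embodiment-specific stems project differently sized proprioception + vision inputs to a fixed-shape token sequence, so one shared trunk can serve every embodiment, with task-specific heads mapping back to each robot's action space.

```python
import numpy as np

rng = np.random.default_rng(0)

TOKEN_DIM = 64  # shared trunk width (illustrative value)
N_TOKENS = 16   # short token sequence per embodiment (illustrative value)

class LinearStem:
    """Embodiment-specific stem: projects raw proprioception/vision
    features of any input size into a fixed (N_TOKENS, TOKEN_DIM) grid."""
    def __init__(self, in_dim):
        self.W = rng.normal(0.0, 0.02, (in_dim, N_TOKENS * TOKEN_DIM))

    def __call__(self, obs):
        return (obs @ self.W).reshape(N_TOKENS, TOKEN_DIM)

class SharedTrunk:
    """Shared, embodiment-agnostic trunk. Here a single token-wise
    linear layer + nonlinearity stands in for HPT's Transformer."""
    def __init__(self):
        self.W = rng.normal(0.0, 0.02, (TOKEN_DIM, TOKEN_DIM))

    def __call__(self, tokens):
        return np.tanh(tokens @ self.W)

class ActionHead:
    """Task-specific head: pools trunk tokens and maps to actions."""
    def __init__(self, action_dim):
        self.W = rng.normal(0.0, 0.02, (TOKEN_DIM, action_dim))

    def __call__(self, tokens):
        return tokens.mean(axis=0) @ self.W  # mean-pool, then project

# Two hypothetical embodiments with different input/action sizes
# share the same trunk weights.
trunk = SharedTrunk()
arm_stem = LinearStem(in_dim=7 + 512)    # 7-DoF arm state + image feature
quad_stem = LinearStem(in_dim=12 + 512)  # 12-DoF quadruped + image feature
arm_head = ActionHead(action_dim=7)
quad_head = ActionHead(action_dim=12)

arm_action = arm_head(trunk(arm_stem(rng.normal(size=7 + 512))))
quad_action = quad_head(trunk(quad_stem(rng.normal(size=12 + 512))))
print(arm_action.shape, quad_action.shape)  # (7,) (12,)
```

During pre-training, only the trunk would be shared and updated across all 52 datasets; transferring to an unseen embodiment or task means training a new small stem and head while reusing (or lightly fine-tuning) the trunk.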
