

GET-Zero: Graph Embodiment Transformer for Zero-shot Embodiment Generalization

July 20, 2024
Authors: Austin Patel, Shuran Song
cs.AI

Abstract

This paper introduces GET-Zero, a model architecture and training procedure for learning an embodiment-aware control policy that can immediately adapt to new hardware changes without retraining. To do so, we present Graph Embodiment Transformer (GET), a transformer model that leverages the embodiment graph connectivity as a learned structural bias in the attention mechanism. We use behavior cloning to distill demonstration data from embodiment-specific expert policies into an embodiment-aware GET model that conditions on the hardware configuration of the robot to make control decisions. We conduct a case study on a dexterous in-hand object rotation task using different configurations of a four-fingered robot hand with joints removed and with link length extensions. Using the GET model along with a self-modeling loss enables GET-Zero to zero-shot generalize to unseen variation in graph structure and link length, yielding a 20% improvement over baseline methods. All code and qualitative video results are on https://get-zero-paper.github.io
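The abstract's key architectural idea is using the embodiment graph's connectivity as a learned structural bias inside the transformer's attention. The snippet below is a minimal, illustrative sketch of that idea, not the authors' implementation: the module name `GraphBiasedAttention`, the per-distance bias table `dist_bias`, and the `graph_dist` input are assumptions chosen for clarity.

```python
# Illustrative sketch (assumed design, not the paper's code): self-attention over
# per-joint tokens with a learned bias indexed by graph distance between joints.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphBiasedAttention(nn.Module):
    """Single-head self-attention where attention logits are offset by a learned
    scalar bias looked up from the embodiment-graph distance between joints."""

    def __init__(self, dim: int, max_graph_dist: int = 8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # One learned bias per possible graph distance (hypothetical parameterization).
        self.dist_bias = nn.Embedding(max_graph_dist + 1, 1)
        self.scale = dim ** -0.5

    def forward(self, tokens: torch.Tensor, graph_dist: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_joints, dim)
        # graph_dist: (num_joints, num_joints) integer hop counts in the kinematic graph
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        logits = torch.einsum("bid,bjd->bij", q, k) * self.scale
        # Add the structural bias before softmax so connectivity shapes attention.
        logits = logits + self.dist_bias(graph_dist).squeeze(-1)
        attn = F.softmax(logits, dim=-1)
        return self.out(torch.einsum("bij,bjd->bid", attn, v))
```

In such a setup, `graph_dist` could be precomputed as shortest-path hop counts over the robot's kinematic tree, so the same weights apply to hands with joints removed or links extended, which is what enables conditioning on a new hardware configuration without retraining.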
