GET-Zero: Graph Embodiment Transformer for Zero-shot Embodiment Generalization
July 20, 2024
Authors: Austin Patel, Shuran Song
cs.AI
Abstract
This paper introduces GET-Zero, a model architecture and training procedure
for learning an embodiment-aware control policy that can immediately adapt to
new hardware changes without retraining. To do so, we present Graph Embodiment
Transformer (GET), a transformer model that leverages the embodiment graph
connectivity as a learned structural bias in the attention mechanism. We use
behavior cloning to distill demonstration data from embodiment-specific expert
policies into an embodiment-aware GET model that conditions on the hardware
configuration of the robot to make control decisions. We conduct a case study
on a dexterous in-hand object rotation task using different configurations of a
four-fingered robot hand with joints removed and with link length extensions.
Using the GET model along with a self-modeling loss enables GET-Zero to
zero-shot generalize to unseen variation in graph structure and link length,
yielding a 20% improvement over baseline methods. All code and qualitative
video results are on https://get-zero-paper.github.io.
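The abstract's central mechanism is using embodiment-graph connectivity as a learned structural bias in the attention mechanism. A minimal sketch of that general idea is shown below. This is a hypothetical illustration, not the authors' implementation: the choice of a learnable bias table indexed by shortest-path graph distance, and the identity query/key/value projections, are simplifying assumptions about one common way to inject graph structure into attention.

```python
import numpy as np
from collections import deque

def shortest_paths(adj):
    """All-pairs shortest-path lengths on an unweighted graph via BFS.
    Unreachable pairs get the sentinel value n (number of nodes)."""
    n = len(adj)
    dist = np.full((n, n), n, dtype=int)
    for s in range(n):
        dist[s, s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and dist[s, v] == n:
                    dist[s, v] = dist[s, u] + 1
                    queue.append(v)
    return dist

def graph_biased_attention(x, adj, bias_table):
    """Single attention head with an additive structural bias.

    x          : (n, d) per-node (e.g. per-joint) token embeddings
    adj        : (n, n) embodiment-graph adjacency matrix
    bias_table : (k,) learnable scalars, one per graph distance
                 (distances beyond k-1 share the last entry)

    Identity q/k/v projections are used for brevity; a real model
    would apply learned linear maps and multiple heads.
    """
    n, d = x.shape
    dist = shortest_paths(adj)
    scores = (x @ x.T) / np.sqrt(d)
    idx = np.minimum(dist, len(bias_table) - 1)
    scores = scores + bias_table[idx]          # graph-structural bias
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)      # softmax over nodes
    return w @ x

# Usage: a 3-joint kinematic chain (finger-like), random embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
bias_table = np.zeros(3)  # learned in practice; zeros here
out = graph_biased_attention(x, adj, bias_table)
```

Because the bias depends only on graph distances, the same learned table applies to hands with joints removed or links rearranged, which is what lets an embodiment-aware policy transfer zero-shot across hardware variants.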