Grasping Diverse Objects with Simulated Humanoids
July 16, 2024
Authors: Zhengyi Luo, Jinkun Cao, Sammy Christen, Alexander Winkler, Kris Kitani, Weipeng Xu
cs.AI
Abstract
We present a method for controlling a simulated humanoid to grasp an object
and move it to follow an object trajectory. Due to the challenges in
controlling a humanoid with dexterous hands, prior methods often use a
disembodied hand and only consider vertical lifts or short trajectories. This
limited scope hampers their applicability for object manipulation required for
animation and simulation. To close this gap, we learn a controller that can
pick up a large number (>1200) of objects and carry them to follow randomly
generated trajectories. Our key insight is to leverage a humanoid motion
representation that provides human-like motor skills and significantly speeds
up training. Using only simplistic reward, state, and object representations,
our method shows favorable scalability on diverse objects and trajectories. For
training, we do not need a dataset of paired full-body motion and object
trajectories. At test time, we only require the object mesh and desired
trajectories for grasping and transporting. To demonstrate the capabilities of
our method, we show state-of-the-art success rates in following object
trajectories and generalizing to unseen objects. Code and models will be
released.