Grasping Diverse Objects with Simulated Humanoids
July 16, 2024
Authors: Zhengyi Luo, Jinkun Cao, Sammy Christen, Alexander Winkler, Kris Kitani, Weipeng Xu
cs.AI
Abstract
We present a method for controlling a simulated humanoid to grasp an object
and move it to follow an object trajectory. Due to the challenges in
controlling a humanoid with dexterous hands, prior methods often use a
disembodied hand and only consider vertical lifts or short trajectories. This
limited scope hampers their applicability for object manipulation required for
animation and simulation. To close this gap, we learn a controller that can
pick up a large number (>1200) of objects and carry them to follow randomly
generated trajectories. Our key insight is to leverage a humanoid motion
representation that provides human-like motor skills and significantly speeds
up training. Using only simplistic reward, state, and object representations,
our method shows favorable scalability on diverse objects and trajectories. For
training, we do not need a dataset of paired full-body motion and object
trajectories. At test time, we only require the object mesh and desired
trajectories for grasping and transporting. To demonstrate the capabilities of
our method, we show state-of-the-art success rates in following object
trajectories and generalizing to unseen objects. Code and models will be
released.
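The abstract mentions that training uses only simplistic reward, state, and object representations for trajectory following. The paper does not give the reward formula here, so the following is a minimal sketch of one common shape such a reward could take: exponentiated distance terms that encourage the hand to reach the object and the object to track its reference trajectory. The function name, gain values, and the 0.7/0.3 weighting are all assumptions for illustration, not the authors' actual reward.

```python
import numpy as np

def trajectory_tracking_reward(obj_pos, ref_pos, hand_pos,
                               k_track=5.0, k_reach=2.0):
    """Hypothetical dense reward for grasp-and-carry training.

    obj_pos  -- current object position (3,)
    ref_pos  -- desired object position on the reference trajectory (3,)
    hand_pos -- humanoid hand position (3,)
    Gains k_track / k_reach and the term weights are illustrative only.
    """
    # Object should follow the randomly generated reference trajectory.
    track = np.exp(-k_track * np.linalg.norm(obj_pos - ref_pos) ** 2)
    # Hand should stay close to the object (reach/grasp term).
    reach = np.exp(-k_reach * np.linalg.norm(hand_pos - obj_pos) ** 2)
    # Weighted sum, bounded in (0, 1].
    return 0.7 * track + 0.3 * reach
```

Both terms are 1 when their errors are zero and decay smoothly with distance, which keeps the reward dense and bounded; this kind of shaping is typical for physics-based character control, though the actual terms used in the paper may differ.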