
DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects

May 9, 2023
作者: Chen Bao, Helin Xu, Yuzhe Qin, Xiaolong Wang
cs.AI

Abstract

To enable general-purpose robots, we will require the robot to operate everyday articulated objects as humans do. Current robot manipulation has relied heavily on using a parallel gripper, which restricts the robot to a limited set of objects. On the other hand, operating with a multi-finger robot hand will allow a better approximation of human behavior and enable the robot to operate on diverse articulated objects. To this end, we propose a new benchmark called DexArt, which involves Dexterous manipulation with Articulated objects in a physical simulator. In our benchmark, we define multiple complex manipulation tasks, and the robot hand will need to manipulate diverse articulated objects within each task. Our main focus is to evaluate the generalizability of the learned policy on unseen articulated objects. This is very challenging given the high degrees of freedom of both hands and objects. We use Reinforcement Learning with 3D representation learning to achieve generalization. Through extensive studies, we provide new insights into how 3D representation learning affects decision-making in RL with 3D point cloud inputs. More details can be found at https://www.chenbao.tech/dexart/.
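The core recipe the abstract describes — an RL policy that consumes 3D point cloud observations through a learned representation — can be sketched as a permutation-invariant point-cloud encoder feeding a policy head. The sketch below is illustrative only and assumes a PointNet-style encoder (shared per-point MLP plus max pooling); the class name, feature size, and the 22-dimensional action space are placeholders, not the paper's actual architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class PointCloudPolicy:
    """Illustrative sketch: a shared per-point MLP followed by max
    pooling gives a permutation-invariant point-cloud feature, which
    a small policy head maps to bounded hand joint actions."""

    def __init__(self, feat_dim=64, action_dim=22, seed=0):
        rng = np.random.default_rng(seed)
        # Shared per-point MLP: 3 (xyz) -> feat_dim
        self.w1 = rng.standard_normal((3, feat_dim)) * 0.1
        self.b1 = np.zeros(feat_dim)
        # Policy head: feat_dim -> action_dim joint targets (placeholder size)
        self.w2 = rng.standard_normal((feat_dim, action_dim)) * 0.1
        self.b2 = np.zeros(action_dim)

    def encode(self, points):
        # points: (N, 3) point cloud; per-point features, then max-pool
        # over the point axis so point ordering does not matter
        per_point = relu(points @ self.w1 + self.b1)   # (N, feat_dim)
        return per_point.max(axis=0)                   # (feat_dim,)

    def act(self, points):
        feature = self.encode(points)
        return np.tanh(feature @ self.w2 + self.b2)    # actions in [-1, 1]

policy = PointCloudPolicy()
cloud = np.random.default_rng(1).standard_normal((512, 3))
action = policy.act(cloud)   # one joint-target vector per observation
```

In an actual RL loop the encoder and head weights would be trained jointly (or the encoder pretrained, as the paper's studies on representation learning suggest); the max-pooling step is what makes the feature invariant to the arbitrary ordering of points in the cloud.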