Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations
July 12, 2023
Authors: Moo Jin Kim, Jiajun Wu, Chelsea Finn
cs.AI
Abstract
Eye-in-hand cameras have shown promise in enabling greater sample efficiency
and generalization in vision-based robotic manipulation. However, for robotic
imitation, it is still expensive to have a human teleoperator collect large
amounts of expert demonstrations with a real robot. Videos of humans performing
tasks, on the other hand, are much cheaper to collect since they eliminate the
need for expertise in robotic teleoperation and can be quickly captured in a
wide range of scenarios. Therefore, human video demonstrations are a promising
data source for learning generalizable robotic manipulation policies at scale.
In this work, we augment narrow robotic imitation datasets with broad unlabeled
human video demonstrations to greatly enhance the generalization of eye-in-hand
visuomotor policies. Although a clear visual domain gap exists between human
and robot data, our framework does not need to employ any explicit domain
adaptation method, as we leverage the partial observability of eye-in-hand
cameras as well as a simple fixed image masking scheme. On a suite of eight
real-world tasks involving both 3-DoF and 6-DoF robot arm control, our method
improves the success rates of eye-in-hand manipulation policies by 58%
(absolute) on average, enabling robots to generalize to both new environment
configurations and new tasks that are unseen in the robot demonstration data.
See video results at https://giving-robots-a-hand.github.io/.
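
To make the "fixed image masking scheme" concrete, below is a minimal sketch of the idea. It assumes, purely for illustration, that the manipulator (human hand or robot gripper) always occupies a fixed band at the bottom of the eye-in-hand frame; the mask geometry, image size, and function names here are hypothetical choices, not the paper's exact parameters. Because the same mask is applied to human and robot frames alike, the policy never observes whether a hand or a gripper is acting, which is what lets the framework avoid explicit domain adaptation.

    import numpy as np

    # Hypothetical mask geometry: rows 70..99 of a 100-row image are
    # assumed to contain the hand/gripper. The real mask region would be
    # chosen to cover wherever the manipulator appears in the camera view.
    MASK_TOP_ROW = 70

    def mask_eye_in_hand_frame(frame: np.ndarray) -> np.ndarray:
        """Zero out the fixed image region where the manipulator appears.

        Applied identically to human and robot frames, so the downstream
        visuomotor policy cannot distinguish the two embodiments.
        """
        masked = frame.copy()
        masked[MASK_TOP_ROW:, :, :] = 0  # black out the fixed bottom band
        return masked

    # Usage: preprocess every frame from both data sources the same way.
    robot_frame = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
    human_frame = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
    robot_obs = mask_eye_in_hand_frame(robot_frame)
    human_obs = mask_eye_in_hand_frame(human_frame)

The design choice this sketch illustrates is that the mask is fixed rather than learned or tracked: since an eye-in-hand camera only ever sees the manipulator in a predictable part of the frame, a single static crop suffices where a third-person viewpoint would require per-frame segmentation or explicit domain adaptation.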