RP1M: A Large-Scale Motion Dataset for Piano Playing with Bi-Manual Dexterous Robot Hands
August 20, 2024
Authors: Yi Zhao, Le Chen, Jan Schneider, Quankai Gao, Juho Kannala, Bernhard Schölkopf, Joni Pajarinen, Dieter Büchler
cs.AI
Abstract
It has been a long-standing research goal to endow robot hands with
human-level dexterity. Bi-manual robot piano playing constitutes a task that
combines challenges from dynamic tasks, such as generating fast yet precise
motions, with slower but contact-rich manipulation problems. Although
reinforcement learning based approaches have shown promising results in
single-task performance, these methods struggle in a multi-song setting. Our
work aims to close this gap and, thereby, enable imitation learning approaches
for robot piano playing at scale. To this end, we introduce the Robot Piano 1
Million (RP1M) dataset, containing bi-manual robot piano playing motion data of
more than one million trajectories. We formulate finger placements as an
optimal transport problem, thus enabling automatic annotation of vast amounts
of unlabeled songs. Benchmarking existing imitation learning approaches shows
that such approaches reach state-of-the-art robot piano playing performance by
leveraging RP1M.
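
To make the optimal-transport view of finger placement concrete, here is a minimal, hypothetical sketch (not the paper's implementation): fingertip-to-key assignment is posed as a balanced assignment problem, a discrete special case of optimal transport, and solved exactly with the Hungarian algorithm via SciPy. The function name, the squared-distance cost, and all positions are illustrative assumptions.

```python
# Hypothetical sketch of finger placement as a discrete optimal-transport
# (assignment) problem; this is NOT the RP1M codebase.
import numpy as np
from scipy.optimize import linear_sum_assignment


def assign_fingers_to_keys(fingertip_xy, key_xy):
    """Map each key that must be pressed to the finger that can reach it most cheaply.

    fingertip_xy: (F, 2) array of current planar fingertip positions.
    key_xy:       (K, 2) array of positions of keys to press, with K <= F.
    Returns a dict {finger_index: key_index}.
    """
    # Transport cost: squared Euclidean distance from every finger to every key.
    cost = np.sum((fingertip_xy[:, None, :] - key_xy[None, :, :]) ** 2, axis=-1)
    # The Hungarian algorithm solves this assignment instance exactly; with a
    # rectangular cost matrix, F - K fingers are simply left unassigned.
    finger_idx, key_idx = linear_sum_assignment(cost)
    return dict(zip(finger_idx.tolist(), key_idx.tolist()))


# Example: ten fingertips (two hands) and three keys active at the next timestep.
fingers = np.random.rand(10, 2)
keys = np.array([[0.10, 0.0], [0.35, 0.0], [0.60, 0.0]])
print(assign_fingers_to_keys(fingers, keys))
```

In the setting the abstract describes, an assignment of this kind would presumably provide the fingering labels for notes of otherwise unlabeled songs, which is what allows the dataset to be annotated automatically at scale.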