Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing
September 25, 2024
Authors: Pei Xu, Ruocheng Wang
cs.AI
Abstract
We present a novel approach to synthesize dexterous motions for physically
simulated hands in tasks that require coordination between the control of two
hands with high temporal precision. Instead of directly learning a joint policy
to control two hands, our approach performs bimanual control through
cooperative learning where each hand is treated as an individual agent. The
individual policies for each hand are first trained separately, and then
synchronized through latent space manipulation in a centralized environment to
serve as a joint policy for two-hand control. By doing so, we avoid directly
performing policy learning in the higher-dimensional joint state-action space
of the two hands, greatly improving the overall training efficiency. We
demonstrate the effectiveness of our proposed approach in the challenging
guitar-playing task. The virtual guitarist trained by our approach can
synthesize motions from unstructured reference data of general guitar-playing
practice motions, and accurately play diverse rhythms with complex chord
pressing and string picking patterns based on the input guitar tabs that do not
exist in the references. Along with this paper, we provide the motion capture
data that we collected as the reference for policy training. Code is available
at: https://pei-xu.github.io/guitar.
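The composition described in the abstract — two separately trained per-hand policies combined into a joint policy by manipulating their latent codes in a centralized environment — can be illustrated with a minimal sketch. All class and variable names below are hypothetical, the "policies" are random linear maps rather than trained networks, and the reinforcement learning that would train the synchronization module is omitted; this only shows the structural idea of perturbing frozen per-hand latents with a joint-observation-conditioned module.

```python
import numpy as np

rng = np.random.default_rng(0)

class HandPolicy:
    """Stand-in for a pre-trained single-hand policy:
    observation -> latent code -> action (both maps frozen)."""
    def __init__(self, obs_dim, latent_dim, act_dim):
        self.W_enc = rng.standard_normal((obs_dim, latent_dim)) * 0.1
        self.W_dec = rng.standard_normal((latent_dim, act_dim)) * 0.1

    def encode(self, obs):
        return np.tanh(obs @ self.W_enc)

    def decode(self, z):
        return np.tanh(z @ self.W_dec)

class SyncJointPolicy:
    """Joint policy built on two frozen hand policies.
    A small trainable module maps the joint observation to offsets for
    each hand's latent code, so coordination is learned in latent space
    instead of the full joint state-action space."""
    def __init__(self, left, right, obs_dim, latent_dim):
        self.left, self.right = left, right
        # Trainable synchronization weights; zero-initialized so the
        # joint policy initially reproduces the individual policies.
        self.W_sync = np.zeros((2 * obs_dim, 2 * latent_dim))

    def act(self, obs_l, obs_r):
        z_l = self.left.encode(obs_l)
        z_r = self.right.encode(obs_r)
        joint_obs = np.concatenate([obs_l, obs_r])
        dz = joint_obs @ self.W_sync          # latent-space manipulation
        d = z_l.shape[0]
        return (self.left.decode(z_l + dz[:d]),
                self.right.decode(z_r + dz[d:]))

left = HandPolicy(obs_dim=8, latent_dim=4, act_dim=6)
right = HandPolicy(obs_dim=8, latent_dim=4, act_dim=6)
joint = SyncJointPolicy(left, right, obs_dim=8, latent_dim=4)

obs_l = rng.standard_normal(8)
obs_r = rng.standard_normal(8)
act_l, act_r = joint.act(obs_l, obs_r)
```

With `W_sync` at its zero initialization, the joint policy's actions match what each hand policy would produce on its own, which is one reason this composition can be cheaper to train than a joint policy learned from scratch.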