
Physics-based Motion Retargeting from Sparse Inputs

July 4, 2023
作者: Daniele Reda, Jungdam Won, Yuting Ye, Michiel van de Panne, Alexander Winkler
cs.AI

Abstract

Avatars are important for creating interactive and immersive experiences in virtual worlds. One challenge in animating these characters to mimic a user's motion is that commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data about the user's pose. Another challenge is that an avatar might have a different skeleton structure than a human, and the mapping between them is unclear. In this work we address both of these challenges. We introduce a method to retarget motions in real time from sparse human sensor data to characters of various morphologies. Our method uses reinforcement learning to train a policy that controls characters in a physics simulator. We only require human motion capture data for training, without relying on artist-generated animations for each avatar. This allows us to use large motion capture datasets to train general policies that can track unseen users from real, sparse data in real time. We demonstrate the feasibility of our approach on three characters with different skeleton structures: a dinosaur, a mouse-like creature, and a human. We show that the avatar poses often match the user surprisingly well, despite no sensor information about the lower body being available. We discuss and ablate the important components of our framework, specifically the kinematic retargeting step, the imitation, contact, and action rewards, and our asymmetric actor-critic observations. We further explore the robustness of our method in a variety of settings, including unbalancing, dancing, and sports motions.
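The asymmetric actor-critic design mentioned in the abstract can be illustrated with a minimal sketch: the actor (deployed policy) only observes what a real headset and controllers provide, while the critic may additionally consume privileged simulator state during training, since it is discarded at deployment. All dimensions, network sizes, and names below are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical dimensions for illustration (not taken from the paper):
SPARSE_OBS_DIM = 3 * 9   # headset + two controllers, 9 features each
FULL_STATE_DIM = 64      # privileged full-body simulator state (critic only)
ACTION_DIM = 20          # e.g. joint targets for the simulated character

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Random weights for a tiny MLP; a stand-in for real trained networks."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass with tanh hidden layers and a linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Asymmetric observations: the actor sees only the sparse sensor input,
# while the critic's value estimate uses the full simulator state.
actor_params  = init_mlp([SPARSE_OBS_DIM, 64, ACTION_DIM], rng)
critic_params = init_mlp([FULL_STATE_DIM, 64, 1], rng)

sparse_obs = rng.normal(size=SPARSE_OBS_DIM)   # available at deployment
full_state = rng.normal(size=FULL_STATE_DIM)   # available only in training

action = mlp_forward(actor_params, sparse_obs)   # shape (ACTION_DIM,)
value  = mlp_forward(critic_params, full_state)  # shape (1,)
```

The design choice this sketches: because the critic is never needed at inference time, it can exploit information (e.g. lower-body state) that the headset cannot sense, which typically stabilizes training without constraining deployment.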