
Composite Motion Learning with Task Control

May 5, 2023
作者: Pei Xu, Xiumin Shang, Victor Zordan, Ioannis Karamouzas
cs.AI

Abstract

We present a deep learning method for composite and task-driven motion control for physically simulated characters. In contrast to existing data-driven approaches that use reinforcement learning to imitate full-body motions, we learn decoupled motions for specific body parts from multiple reference motions simultaneously and directly by leveraging multiple discriminators in a GAN-like setup. In this process, there is no need for any manual work to produce composite reference motions for learning. Instead, the control policy explores on its own how the composite motions can be combined automatically. We further account for multiple task-specific rewards and train a single, multi-objective control policy. To this end, we propose a novel framework for multi-objective learning that adaptively balances the learning of disparate motions from multiple sources and of multiple goal-directed control objectives. In addition, as composite motions are typically augmentations of simpler behaviors, we introduce a sample-efficient method for training composite control policies in an incremental manner, where we reuse a pre-trained policy as the meta policy and train a cooperative policy that adapts the meta one to new composite tasks. We show the applicability of our approach on a variety of challenging multi-objective tasks involving both composite motion imitation and multiple goal-directed control.
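To make the multi-discriminator idea concrete, the sketch below illustrates one discriminator per body-part group, each scoring that part's simulated transitions against its own reference motion and yielding a per-part style reward. This is a minimal illustration under assumed details, not the authors' implementation: all names (PartDiscriminator, part_dims, style_rewards) are hypothetical, the network sizes are arbitrary, and the simple mean over parts stands in for the paper's adaptive multi-objective balancing.

```python
# Hypothetical sketch of per-body-part discriminators providing style rewards,
# in the spirit of the GAN-like setup described in the abstract.
import torch
import torch.nn as nn


class PartDiscriminator(nn.Module):
    """Scores state transitions of one body-part group against its reference clip."""

    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# One discriminator per body-part group, each tied to its own reference motion,
# e.g. the upper body imitates a "waving" clip while the lower body imitates "walking".
part_dims = {"upper_body": 64, "lower_body": 48}  # assumed per-part observation sizes
discriminators = {name: PartDiscriminator(dim) for name, dim in part_dims.items()}


def style_rewards(part_obs: dict) -> torch.Tensor:
    """Combine per-part discriminator scores into a single style reward.

    Uses the clipped least-squares reward r = max(0, 1 - 0.25 * (d - 1)^2)
    common in adversarial motion imitation; averaging over parts is a
    placeholder for the adaptive balancing described in the paper.
    """
    rewards = []
    for name, obs in part_obs.items():
        d = discriminators[name](obs)
        rewards.append(torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0))
    return torch.stack(rewards, dim=0).mean(dim=0)


# Example: random per-part observations for a batch of 8 simulated transitions.
batch = {name: torch.randn(8, dim) for name, dim in part_dims.items()}
print(style_rewards(batch).shape)  # torch.Size([8, 1])
```

In a full training loop, each discriminator would be updated to separate its reference clip from the policy's rollouts for that body part, while the policy maximizes these per-part style rewards together with the task-specific goal rewards.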