Perpetual Humanoid Control for Real-time Simulated Avatars
May 10, 2023
Authors: Zhengyi Luo, Jinkun Cao, Alexander Winkler, Kris Kitani, Weipeng Xu
cs.AI
Abstract
We present a physics-based humanoid controller that achieves high-fidelity
motion imitation and fault-tolerant behavior in the presence of noisy input
(e.g. pose estimates from video or generated from language) and unexpected
falls. Our controller scales up to learning ten thousand motion clips without
using any external stabilizing forces and learns to naturally recover from
fail-states. Given reference motion, our controller can perpetually control
simulated avatars without requiring resets. At its core, we propose the
progressive multiplicative control policy (PMCP), which dynamically allocates
new network capacity to learn harder and harder motion sequences. PMCP allows
efficient scaling for learning from large-scale motion databases and adding new
tasks, such as fail-state recovery, without catastrophic forgetting. We
demonstrate the effectiveness of our controller by using it to imitate noisy
poses from video-based pose estimators and language-based motion generators in
a live, real-time multi-person avatar use case.
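As a rough illustration of the multiplicative-composition idea behind PMCP (a minimal sketch under assumed details, not the authors' implementation), the PyTorch snippet below combines several Gaussian "primitive" policies through a learned gate and appends a new primitive when harder motion clips are introduced. The class names, network sizes, and the add_primitive helper are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Primitive(nn.Module):
    """One Gaussian expert: maps a state to an action mean and scale."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * action_dim),
        )

    def forward(self, s):
        mu, log_sigma = self.net(s).chunk(2, dim=-1)
        return mu, log_sigma.clamp(-5, 2).exp()


class MultiplicativeControlPolicy(nn.Module):
    """Gate-weighted multiplicative composition of Gaussian primitives.

    New primitives can be appended (with earlier ones frozen) to mimic the
    "allocate new capacity for harder motion sequences" idea in the abstract.
    """

    def __init__(self, state_dim, action_dim, n_primitives=4):
        super().__init__()
        self.state_dim, self.action_dim = state_dim, action_dim
        self.primitives = nn.ModuleList(
            [Primitive(state_dim, action_dim) for _ in range(n_primitives)]
        )
        self.gate = self._make_gate()

    def _make_gate(self):
        # Gate output size must match the current number of primitives.
        return nn.Sequential(
            nn.Linear(self.state_dim, 256), nn.ReLU(),
            nn.Linear(256, len(self.primitives)),
        )

    def add_primitive(self, freeze_old=True):
        """Allocate a new expert for harder clips; optionally freeze old ones."""
        if freeze_old:
            for p in self.primitives.parameters():
                p.requires_grad_(False)
        self.primitives.append(Primitive(self.state_dim, self.action_dim))
        self.gate = self._make_gate()  # resized gate is trained from scratch

    def forward(self, s):
        w = torch.softmax(self.gate(s), dim=-1)                 # (B, K) weights
        mus, sigmas = zip(*[p(s) for p in self.primitives])     # K x (B, A)
        mu = torch.stack(mus, dim=1)                            # (B, K, A)
        sigma = torch.stack(sigmas, dim=1)                      # (B, K, A)
        # Product-of-experts style combination: gate-weighted inverse scales.
        prec = w.unsqueeze(-1) / sigma
        comb_sigma = 1.0 / prec.sum(dim=1)
        comb_mu = comb_sigma * (prec * mu).sum(dim=1)
        return comb_mu, comb_sigma
```

In this sketch, earlier primitives are frozen whenever a new one is added, loosely mirroring the abstract's claim of adding capacity for new tasks (e.g. fail-state recovery) without catastrophic forgetting; the exact training schedule and gating design in the paper may differ.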