InsActor: Instruction-driven Physics-based Characters
December 28, 2023
Authors: Jiawei Ren, Mingyuan Zhang, Cunjun Yu, Xiao Ma, Liang Pan, Ziwei Liu
cs.AI
Abstract
Generating animation of physics-based characters with intuitive control has
long been a desirable task with numerous applications. However, generating
physically simulated animations that reflect high-level human instructions
remains a difficult problem due to the complexity of physical environments and
the richness of human language. In this paper, we present InsActor, a
principled generative framework that leverages recent advancements in
diffusion-based human motion models to produce instruction-driven animations of
physics-based characters. Our framework empowers InsActor to capture complex
relationships between high-level human instructions and character motions by
employing diffusion policies for flexibly conditioned motion planning. To
overcome invalid states and infeasible state transitions in planned motions,
InsActor discovers low-level skills and maps plans to latent skill sequences in
a compact latent space. Extensive experiments demonstrate that InsActor
achieves state-of-the-art results on various tasks, including
instruction-driven motion generation and instruction-driven waypoint heading.
Notably, the ability of InsActor to generate physically simulated animations
using high-level human instructions makes it a valuable tool, particularly in
executing long-horizon tasks with a rich set of instructions.
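The abstract describes a two-level design: a diffusion-based planner turns a language instruction into a sequence of planned character states, which is then mapped to a compact latent skill sequence and executed by a low-level controller in a physics simulator. The following is a minimal, hypothetical sketch of that pipeline only; all module names, dimensions, the toy refinement loop standing in for reverse diffusion, and the placeholder simulator step are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an instruction-driven, two-level animation pipeline.
# Dimensions, module names, and the dummy refinement loop are illustrative only.
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM, ACTION_DIM, HORIZON = 64, 32, 28, 16

class DiffusionPlanner(nn.Module):
    """Stand-in for an instruction-conditioned diffusion policy over state plans."""
    def __init__(self, text_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + text_dim, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM))

    def forward(self, plan, text_emb, steps=8):
        # Toy iterative refinement loop standing in for reverse diffusion.
        for _ in range(steps):
            cond = text_emb.unsqueeze(1).expand(-1, plan.size(1), -1)
            plan = plan - 0.1 * self.net(torch.cat([plan, cond], dim=-1))
        return plan  # (B, HORIZON, STATE_DIM) planned character states

class SkillEncoder(nn.Module):
    """Maps adjacent planned states to a compact latent skill code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(2 * STATE_DIM, LATENT_DIM)

    def forward(self, plan):
        pairs = torch.cat([plan[:, :-1], plan[:, 1:]], dim=-1)
        return self.net(pairs)  # (B, HORIZON-1, LATENT_DIM)

class SkillController(nn.Module):
    """Low-level policy: decodes current state plus latent skill into an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(STATE_DIM + LATENT_DIM, ACTION_DIM)

    def forward(self, state, skill):
        return self.net(torch.cat([state, skill], dim=-1))

if __name__ == "__main__":
    planner, encoder, controller = DiffusionPlanner(), SkillEncoder(), SkillController()
    text_emb = torch.randn(1, 512)        # e.g. a language-model embedding of the instruction
    plan = planner(torch.randn(1, HORIZON, STATE_DIM), text_emb)
    skills = encoder(plan)                # compact latent skill sequence
    state = torch.randn(1, STATE_DIM)     # current simulated character state
    for t in range(HORIZON - 1):
        action = controller(state, skills[:, t])  # action for the physics simulator
        # state = simulator.step(action)  # placeholder: the physics engine would update the state here
```

The intent of the sketch is the division of labor the abstract emphasizes: the planner handles the rich language conditioning, while the latent skill space constrains execution to transitions a physically simulated character can actually realize.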