Example-based Motion Synthesis via Generative Motion Matching
June 1, 2023
Authors: Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen
cs.AI
Abstract
We present GenMM, a generative model that "mines" as many diverse motions as
possible from a single or few example sequences. In stark contrast to existing
data-driven methods, which typically require long offline training time, are
prone to visual artifacts, and tend to fail on large and complex skeletons,
GenMM inherits the training-free nature and the superior quality of the
well-known Motion Matching method. GenMM can synthesize a high-quality motion
within a fraction of a second, even with highly complex and large skeletal
structures. At the heart of our generative framework lies the generative motion
matching module, which uses bidirectional visual similarity as a
generative cost function for motion matching and operates in a multi-stage
framework to progressively refine a random guess using exemplar motion matches.
In addition to diverse motion generation, we show the versatility of our
generative framework by extending it to a number of scenarios that are not
possible with motion matching alone, including motion completion,
keyframe-guided generation, infinite looping, and motion reassembly.
Code and data for this paper are available at https://wyysf-98.github.io/GenMM/.
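The patch-based refinement the abstract describes can be sketched in a few lines. The following is a minimal, single-stage illustration, not the authors' implementation: the motion representation (a frames-by-features array), the patch size, the averaging-based blending, and the normalized nearest-neighbor distance used to approximate a bidirectional (completeness + coherence) similarity are all assumptions made for illustration.

```python
import numpy as np

def extract_patches(motion, size):
    # Overlapping temporal windows of `size` frames, each flattened to a vector.
    return np.stack([motion[i:i + size].ravel()
                     for i in range(len(motion) - size + 1)])

def motion_match(guess, example, patch_size=5, n_iters=3, alpha=0.01):
    """Single-stage sketch of generative motion matching: repeatedly
    replace each temporal patch of the guess with its nearest exemplar
    patch, then blend overlapping matches by averaging per frame."""
    ex_patches = extract_patches(example, patch_size)
    for _ in range(n_iters):
        g_patches = extract_patches(guess, patch_size)
        # Squared distances between every guess patch and every exemplar patch.
        d = ((g_patches[:, None, :] - ex_patches[None, :, :]) ** 2).sum(-1)
        # Normalizing each column by that exemplar patch's best match to the
        # guess encourages covering *all* exemplar patches (a common way to
        # approximate bidirectional similarity in patch-based synthesis).
        norm = alpha + d.min(axis=0, keepdims=True)
        nn = (d / norm).argmin(axis=1)
        # Vote-and-average the matched exemplar patches back into frames.
        acc = np.zeros_like(guess)
        cnt = np.zeros(len(guess))
        for i, j in enumerate(nn):
            acc[i:i + patch_size] += example[j:j + patch_size]
            cnt[i:i + patch_size] += 1
        guess = acc / cnt[:, None]
    return guess
```

In the actual multi-stage framework, a loop like this would run at each temporal resolution, starting from noise at the coarsest stage and upsampling the result as the initial guess for the next; the sketch above shows only one such stage.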