Example-based Motion Synthesis via Generative Motion Matching
June 1, 2023
Authors: Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen
cs.AI
Abstract
We present GenMM, a generative model that "mines" as many diverse motions as
possible from a single or few example sequences. In stark contrast to existing
data-driven methods, which typically require long offline training time, are
prone to visual artifacts, and tend to fail on large and complex skeletons,
GenMM inherits the training-free nature and the superior quality of the
well-known Motion Matching method. GenMM can synthesize a high-quality motion
within a fraction of a second, even with highly complex and large skeletal
structures. At the heart of our generative framework lies the generative motion
matching module, which utilizes bidirectional visual similarity as a
generative cost function for motion matching, and operates in a multi-stage
framework to progressively refine a random guess using exemplar motion matches.
In addition to diverse motion generation, we show the versatility of our
generative framework by extending it to a number of scenarios that are not
possible with motion matching alone, including motion completion, key
frame-guided generation, infinite looping, and motion reassembly. Code and data
for this paper are at https://wyysf-98.github.io/GenMM/
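The sketch below is a minimal illustration, not the authors' implementation, of the generative motion matching idea described above: a random guess is refined coarse-to-fine by matching temporal patches of the synthesis against the example under a bidirectional-similarity-style (completeness-normalized) distance and blending the matches back together. The motion representation ([frames, feature_dim] tensors), patch size, stage schedule, and averaging-based blending are all simplifying assumptions.

```python
import torch
import torch.nn.functional as F


def extract_patches(motion, patch_size, stride=1):
    # motion: [T, C] -> flattened overlapping temporal patches [N, patch_size * C]
    starts = list(range(0, motion.shape[0] - patch_size + 1, stride))
    patches = torch.stack([motion[i:i + patch_size].reshape(-1) for i in starts])
    return patches, starts


def match_and_blend(synth, example, patch_size=11, alpha=0.01):
    # One matching step: replace every temporal patch of `synth` with its best
    # example patch, then blend overlapping matches by averaging.
    q, q_starts = extract_patches(synth, patch_size)   # query patches
    k, _ = extract_patches(example, patch_size)        # key (example) patches
    d = torch.cdist(q, k) ** 2                         # [Nq, Nk] squared distances
    # Completeness normalization (bidirectional-similarity flavour): example
    # patches that no query is close to become cheaper, pushing the synthesis
    # to cover as much of the example as possible.
    d = d / (alpha + d.min(dim=0, keepdim=True).values)
    nearest = d.argmin(dim=1)
    out = torch.zeros_like(synth)
    weight = torch.zeros(synth.shape[0], 1)
    for qi, ki in zip(q_starts, nearest.tolist()):
        out[qi:qi + patch_size] += k[ki].view(patch_size, -1)
        weight[qi:qi + patch_size] += 1.0
    return out / weight.clamp(min=1.0)


def generative_motion_matching(example, target_len, num_stages=4, steps=3):
    # Coarse-to-fine: start from noise at the coarsest temporal resolution and
    # progressively refine while upsampling towards the target length.
    def resize(motion, length):
        return F.interpolate(motion.t()[None], size=length, mode="linear",
                             align_corners=False)[0].t()

    synth = None
    for stage in range(num_stages):
        scale = 0.5 ** (num_stages - 1 - stage)
        ex = example if scale == 1.0 else resize(example, int(example.shape[0] * scale))
        length = int(target_len * scale)
        if synth is None:
            # Coarsest stage: random guess with roughly example-like statistics.
            synth = torch.randn(length, example.shape[1]) * example.std(0)
        else:
            synth = resize(synth, length)
        for _ in range(steps):
            synth = match_and_blend(synth, ex)
    return synth


# Usage: synthesize a twice-as-long motion from a toy example sequence.
example = torch.randn(120, 69)   # e.g. 120 frames, 23 joints x 3 channels
new_motion = generative_motion_matching(example, target_len=240)
```

Because each stage only searches and blends patches, the whole procedure is training-free, which is the property the abstract credits for the sub-second synthesis times.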