

Temporal Residual Jacobians for Rig-free Motion Transfer

July 20, 2024
作者: Sanjeev Muralikrishnan, Niladri Shekhar Dutt, Siddhartha Chaudhuri, Noam Aigerman, Vladimir Kim, Matthew Fisher, Niloy J. Mitra
cs.AI

Abstract

We introduce Temporal Residual Jacobians as a novel representation to enable data-driven motion transfer. Our approach does not assume access to any rigging or intermediate shape keyframes, produces geometrically and temporally consistent motions, and can be used to transfer long motion sequences. Central to our approach are two coupled neural networks that individually predict local geometric and temporal changes that are subsequently integrated, spatially and temporally, to produce the final animated meshes. The two networks are jointly trained, complement each other in producing spatial and temporal signals, and are supervised directly with 3D positional information. During inference, in the absence of keyframes, our method essentially solves a motion extrapolation problem. We test our setup on diverse meshes (synthetic and scanned shapes) to demonstrate its superiority in generating realistic and natural-looking animations on unseen body shapes against SoTA alternatives. Supplemental video and code are available at https://temporaljacobians.github.io/ .
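
The abstract describes two coupled networks, one predicting local geometric change and one predicting temporal change, whose outputs are integrated spatially and temporally to produce the animated meshes. The sketch below is a minimal, illustrative reading of that architecture, not the authors' implementation: the module names, feature dimensions, conditioning signals (per-face features, a motion code), and the least-squares stand-in for the spatial, Poisson-style integration are all assumptions made for the example.

# Minimal, illustrative sketch (assumptions, not the authors' code): two coupled
# MLPs, one predicting per-face Jacobians for the deformed shape and one
# predicting temporal residuals, integrated over time by accumulation and over
# space by a least-squares fit against a mesh gradient operator.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class TemporalResidualJacobianSketch(nn.Module):
    """Feature dimensions and conditioning signals here are assumptions."""

    def __init__(self, face_feat_dim=32, motion_code_dim=64):
        super().__init__()
        # Spatial branch: a per-face 3x3 Jacobian for the base deformation.
        self.spatial_net = MLP(face_feat_dim + motion_code_dim, 9)
        # Temporal branch: a per-face residual change of that Jacobian per step.
        self.temporal_net = MLP(face_feat_dim + motion_code_dim + 1, 9)

    def forward(self, face_feats, motion_code, times):
        num_faces = face_feats.shape[0]
        cond = torch.cat([face_feats, motion_code.expand(num_faces, -1)], dim=-1)
        jac = self.spatial_net(cond).view(num_faces, 3, 3)
        per_frame = []
        for t in times:  # temporal integration: accumulate residuals frame by frame
            t_col = torch.full((num_faces, 1), float(t))
            residual = self.temporal_net(torch.cat([cond, t_col], dim=-1))
            jac = jac + residual.view(num_faces, 3, 3)
            per_frame.append(jac)
        return torch.stack(per_frame)  # (T, F, 3, 3) Jacobians over time


def spatial_integrate(jacobians, grad_op):
    """Least-squares stand-in for the Poisson-style solve that turns per-face
    Jacobians into vertex positions; grad_op is a (3F, V) gradient operator."""
    frames = []
    for jac_t in jacobians:                       # jac_t: (F, 3, 3)
        target = jac_t.reshape(-1, 3)             # stack Jacobian rows: (3F, 3)
        verts = torch.linalg.lstsq(grad_op, target).solution  # (V, 3)
        frames.append(verts)
    return torch.stack(frames)                    # (T, V, 3) animated vertices


if __name__ == "__main__":
    F, V = 200, 120                               # toy sizes, purely illustrative
    model = TemporalResidualJacobianSketch()
    face_feats = torch.randn(F, 32)
    motion_code = torch.randn(64)
    times = torch.linspace(0.0, 1.0, steps=16)
    jacs = model(face_feats, motion_code, times)  # (16, F, 3, 3)
    grad_op = torch.randn(3 * F, V)               # dense stand-in for a real operator
    anim = spatial_integrate(jacs, grad_op)
    print(anim.shape)                             # torch.Size([16, 120, 3])

In this reading, the temporal network supplies residual updates that are accumulated frame by frame, while the spatial solve turns per-face Jacobians into consistent vertex positions; training would then supervise the recovered positions directly with 3D positional data, as the abstract states.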
