PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations
April 5, 2024
Authors: Yang Zheng, Qingqing Zhao, Guandao Yang, Wang Yifan, Donglai Xiang, Florian Dubost, Dmitry Lagun, Thabo Beeler, Federico Tombari, Leonidas Guibas, Gordon Wetzstein
cs.AI
Abstract
Modeling and rendering photorealistic avatars is of crucial importance in
many applications. Existing methods that build a 3D avatar from visual
observations, however, struggle to reconstruct clothed humans. We introduce
PhysAvatar, a novel framework that combines inverse rendering with inverse
physics to automatically estimate the shape and appearance of a human from
multi-view video data along with the physical parameters of the fabric of their
clothes. For this purpose, we adopt a mesh-aligned 4D Gaussian technique for
spatio-temporal mesh tracking as well as a physically based inverse renderer to
estimate the intrinsic material properties. PhysAvatar integrates a physics
simulator to estimate the physical parameters of the garments using
gradient-based optimization in a principled manner. These novel capabilities
enable PhysAvatar to create high-quality novel-view renderings of avatars
dressed in loose-fitting clothes under motions and lighting conditions not seen
in the training data. This marks a significant advancement towards modeling
photorealistic digital humans using physically based inverse rendering with
physics in the loop. Our project website is at:
https://qingqing-zhao.github.io/PhysAvatar
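The abstract describes estimating garment physical parameters by running a physics simulator inside a gradient-based optimization loop, so that simulated cloth matches the captured motion. The sketch below illustrates that idea in a deliberately minimal form: a single mass-spring-damper stands in for the cloth simulator, a synthetic trajectory stands in for the multi-view observations, and finite-difference gradients stand in for the differentiable simulation. All names and parameters here are hypothetical illustrations, not PhysAvatar's actual simulator or API.

```python
import numpy as np

def simulate(k, steps=200, dt=0.01, m=1.0, c=25.0, g=9.8):
    # Toy stand-in for a cloth simulator: a damped mass on a spring
    # under gravity, integrated with symplectic Euler. The stiffness k
    # plays the role of a garment's physical parameter.
    x, v = 0.0, 0.0
    traj = np.empty(steps)
    for t in range(steps):
        a = (-k * x - c * v - m * g) / m
        v += dt * a
        x += dt * v
        traj[t] = x
    return traj

def loss(k, observed):
    # Mismatch between simulated and "observed" positions, analogous
    # to comparing simulated cloth against tracked mesh vertices.
    return float(np.mean((simulate(k) - observed) ** 2))

def estimate_stiffness(observed, k0=60.0, step=0.5, iters=300, eps=1e-3):
    # Gradient-based parameter estimation: finite-difference gradient
    # of the simulation loss, followed by a sign-based descent step.
    k = k0
    for _ in range(iters):
        g_fd = (loss(k + eps, observed) - loss(k - eps, observed)) / (2 * eps)
        k -= step * np.sign(g_fd)
    return k

true_k = 100.0
observed = simulate(true_k)        # synthetic "captured" trajectory
k_est = estimate_stiffness(observed)
```

In the full method the scalar stiffness is replaced by the fabric's material parameters, the toy integrator by a garment simulator, and the synthetic trajectory by meshes tracked from multi-view video; the optimization structure — simulate, compare to observations, update parameters by the gradient — is the same.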