PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations
April 5, 2024
作者: Yang Zheng, Qingqing Zhao, Guandao Yang, Wang Yifan, Donglai Xiang, Florian Dubost, Dmitry Lagun, Thabo Beeler, Federico Tombari, Leonidas Guibas, Gordon Wetzstein
cs.AI
Abstract
Modeling and rendering photorealistic avatars is of crucial importance in
many applications. Existing methods that build a 3D avatar from visual
observations, however, struggle to reconstruct clothed humans. We introduce
PhysAvatar, a novel framework that combines inverse rendering with inverse
physics to automatically estimate the shape and appearance of a human from
multi-view video data along with the physical parameters of the fabric of their
clothes. For this purpose, we adopt a mesh-aligned 4D Gaussian technique for
spatio-temporal mesh tracking as well as a physically based inverse renderer to
estimate the intrinsic material properties. PhysAvatar integrates a physics
simulator to estimate the physical parameters of the garments using
gradient-based optimization in a principled manner. These novel capabilities
enable PhysAvatar to create high-quality novel-view renderings of avatars
dressed in loose-fitting clothes under motions and lighting conditions not seen
in the training data. This marks a significant advancement towards modeling
photorealistic digital humans using physically based inverse rendering with
physics in the loop. Our project website is at:
https://qingqing-zhao.github.io/PhysAvatar
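To make the "physics in the loop" idea concrete, the sketch below shows, in minimal form, how physical garment parameters can be fit by gradient-based optimization through a differentiable simulator: roll out a simulation, compare the simulated vertex trajectories to tracked ones, and backpropagate into the material parameters. This is a hypothetical toy illustration using a 1D mass-spring chain in JAX; the chain model, parameter names, learning rate, and loss are assumptions for exposition and are not PhysAvatar's actual cloth simulator or optimization setup.

```python
# Toy sketch (NOT the PhysAvatar implementation): estimate cloth-like
# stiffness/damping by gradient descent through a differentiable
# mass-spring simulation, matching simulated to "tracked" trajectories.
import jax
import jax.numpy as jnp

N = 8          # vertices in a hanging 1D chain (stand-in for a garment mesh)
DT = 1e-2      # explicit-Euler time step
STEPS = 50     # rollout length
GRAVITY = jnp.array(-9.8)

def step(pos, vel, stiffness, damping):
    """One explicit-Euler step of a pinned chain of unit-rest-length springs."""
    rest = 1.0
    d = pos[1:] - pos[:-1]                                # neighbor offsets
    f = stiffness * (jnp.abs(d) - rest) * jnp.sign(d)     # spring tension
    force = jnp.zeros_like(pos)
    force = force.at[:-1].add(f)      # force on the upper vertex of each spring
    force = force.at[1:].add(-f)      # equal and opposite on the lower vertex
    force = force + GRAVITY - damping * vel
    vel = vel + DT * force
    vel = vel.at[0].set(0.0)          # pin the top vertex
    pos = pos + DT * vel
    return pos, vel

def rollout(params):
    stiffness, damping = params
    pos = -jnp.arange(N, dtype=jnp.float32)   # initial hanging configuration
    vel = jnp.zeros(N)
    traj = []
    for _ in range(STEPS):
        pos, vel = step(pos, vel, stiffness, damping)
        traj.append(pos)
    return jnp.stack(traj)

def loss(params, target_traj):
    """L2 distance between simulated and tracked vertex trajectories."""
    return jnp.mean((rollout(params) - target_traj) ** 2)

# Synthetic "tracked mesh" trajectory from hidden ground-truth parameters.
true_params = (jnp.array(40.0), jnp.array(0.5))
target = rollout(true_params)

params = (jnp.array(10.0), jnp.array(0.1))    # initial guess
grad_fn = jax.jit(jax.grad(loss))
for _ in range(200):
    g = grad_fn(params, target)
    params = tuple(p - 1e-2 * gp for p, gp in zip(params, g))

print("estimated stiffness/damping:", [float(p) for p in params])
```

In the paper's setting, the same principle applies with a full cloth simulator and mesh trajectories tracked from multi-view video; the toy example only illustrates the optimization pattern of differentiating a simulated rollout with respect to material parameters.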