HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs
December 21, 2023
作者: Artem Sevastopolsky, Philip-William Grassal, Simon Giebenhain, ShahRukh Athar, Luisa Verdoliva, Matthias Niessner
cs.AI
Abstract
Current advances in human head modeling make it possible to generate
plausible-looking 3D head models via neural representations. Nevertheless,
constructing complete high-fidelity head models with explicitly controlled
animation remains an issue. Furthermore, completing the head geometry from a
partial observation, e.g. one coming from a depth sensor, while preserving
details is often problematic for existing methods. We introduce a generative
model for detailed 3D head meshes on top of an articulated 3DMM, which allows
explicit animation and high-detail preservation at the same time. Our method is
trained in two stages. First, we register a parametric head model with vertex
displacements to each mesh of the recently introduced NPHM dataset of accurate
3D head scans. The estimated displacements are baked into a hand-crafted UV
layout. Second, we train a StyleGAN model to generalize over the UV maps of
displacements. The decomposition into the parametric model and high-quality
vertex displacements allows us to animate the model and modify it semantically.
We demonstrate results of unconditional generation and of fitting to full or
partial observations. The project page is available at
https://seva100.github.io/headcraft.
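To illustrate the core representation the abstract describes, the sketch below shows how per-vertex displacements stored in a UV map could be applied on top of a template mesh. This is a minimal, hypothetical illustration, not the authors' code: the function name, the nearest-neighbour texel lookup (bilinear sampling would be used in practice), and the toy mesh are all assumptions for demonstration only.

```python
import numpy as np

def apply_uv_displacements(vertices, uvs, disp_map):
    """Offset template vertices by displacements sampled from a UV map.

    vertices: (N, 3) template mesh vertices
    uvs:      (N, 2) per-vertex UV coordinates in [0, 1]
    disp_map: (H, W, 3) displacement map, e.g. one produced by a generator
    """
    h, w, _ = disp_map.shape
    # Nearest-neighbour texel lookup for each vertex's UV coordinate.
    cols = np.clip(np.round(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.round(uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    # Add the sampled 3D displacement to every vertex.
    return vertices + disp_map[rows, cols]

# Toy example: a flat quad displaced by a constant map.
verts = np.zeros((4, 3))
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
disp = np.full((8, 8, 3), 0.5)
out = apply_uv_displacements(verts, uvs, disp)
print(out[0])  # [0.5 0.5 0.5]
```

Because the displacements live in a fixed UV layout, any map sampled from the generative model can be applied to the articulated template in this way, which is what lets the detailed geometry follow the 3DMM's explicit animation.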