

HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs

December 21, 2023
作者: Artem Sevastopolsky, Philip-William Grassal, Simon Giebenhain, ShahRukh Athar, Luisa Verdoliva, Matthias Niessner
cs.AI

Abstract

Current advances in human head modeling allow generating plausible-looking 3D head models via neural representations. Nevertheless, constructing complete high-fidelity head models with explicitly controlled animation remains an issue. Furthermore, completing the head geometry from a partial observation, e.g., from a depth sensor, while preserving details is often problematic for existing methods. We introduce a generative model for detailed 3D head meshes on top of an articulated 3DMM, which enables explicit animation and high-detail preservation at the same time. Our method is trained in two stages. First, we register a parametric head model with vertex displacements to each mesh of the recently introduced NPHM dataset of accurate 3D head scans. The estimated displacements are baked into a hand-crafted UV layout. Second, we train a StyleGAN model to generalize over the UV maps of displacements. The decomposition into the parametric model and high-quality vertex displacements allows us to animate the model and modify it semantically. We demonstrate results for unconditional generation and for fitting to full or partial observations. The project page is available at https://seva100.github.io/headcraft.
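The core decomposition, an articulated parametric head model plus per-vertex displacements stored in a UV map, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name, the nearest-neighbour UV lookup, and the data shapes are hypothetical, not the paper's implementation (which registers to NPHM scans and generates the displacement map with StyleGAN).

```python
import numpy as np

def apply_uv_displacements(vertices, uvs, disp_map):
    """Offset template mesh vertices by displacements stored in a UV map.

    vertices: (N, 3) posed 3DMM template vertex positions
    uvs:      (N, 2) per-vertex UV coordinates in [0, 1]
    disp_map: (H, W, 3) displacement map (e.g. sampled from a 2D generator)
    """
    h, w, _ = disp_map.shape
    # Nearest-neighbour lookup for simplicity; a real pipeline would
    # typically use bilinear sampling over the UV layout.
    rows = np.clip(np.round(uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    cols = np.clip(np.round(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    return vertices + disp_map[rows, cols]

# Toy usage: two vertices, a 4x4 displacement map with one nonzero texel.
verts = np.zeros((2, 3))
uvs = np.array([[0.0, 0.0], [1.0, 1.0]])
disp = np.zeros((4, 4, 3))
disp[3, 3] = [0.0, 0.0, 0.1]  # displace the vertex mapped to UV (1, 1)
out = apply_uv_displacements(verts, uvs, disp)
```

Because the displacements live in a fixed UV layout of the template, the same map can be reapplied after the 3DMM is re-posed, which is what makes explicit animation of the detailed surface possible.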