Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization
May 4, 2023
Authors: Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis
cs.AI
Abstract
There is a growing demand for the accessible creation of high-quality 3D
avatars that are animatable and customizable. Although 3D morphable models
provide intuitive control for editing and animation, and robustness for
single-view face reconstruction, they cannot easily capture geometric and
appearance details. Methods based on neural implicit representations, such as
signed distance functions (SDF) or neural radiance fields, approach
photo-realism, but are difficult to animate and do not generalize well to
unseen data. To tackle this problem, we propose a novel method for constructing
implicit 3D morphable face models that are both generalizable and intuitive for
editing. Trained from a collection of high-quality 3D scans, our face model is
parameterized by geometry, expression, and texture latent codes with a learned
SDF and explicit UV texture parameterization. Once trained, we can reconstruct
an avatar from a single in-the-wild image by leveraging the learned prior to
project the image into the latent space of our model. Our implicit morphable
face models can be used to render an avatar from novel views, animate facial
expressions by modifying expression codes, and edit textures by directly
painting on the learned UV-texture maps. We demonstrate quantitatively and
qualitatively that our method improves photo-realism, geometry, and
expression accuracy compared to state-of-the-art methods.
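The abstract describes a signed distance function conditioned on geometry and expression latent codes. As an illustration only, the following toy sketch shows what such a conditioned SDF interface looks like; the paper's actual model is a learned neural network, and the function body, code dimensions, and modulation scheme here are all hypothetical stand-ins.

```python
import numpy as np

def conditioned_sdf(point, z_geo, z_exp):
    """Toy stand-in for a learned SDF: a sphere whose radius is
    modulated by geometry and expression latent codes.
    In the actual model this would be an MLP trained on 3D scans."""
    radius = 1.0 + 0.1 * np.tanh(z_geo.mean()) + 0.05 * np.tanh(z_exp.mean())
    return float(np.linalg.norm(point) - radius)

# With zero latent codes the surface is the unit sphere, so a point
# at distance 1 from the origin lies on the zero level set.
z_geo = np.zeros(64)   # hypothetical 64-dim geometry code
z_exp = np.zeros(16)   # hypothetical 16-dim expression code
d = conditioned_sdf(np.array([1.0, 0.0, 0.0]), z_geo, z_exp)
print(d)  # 0.0
```

Animating an expression, as described above, then amounts to changing `z_exp` while holding `z_geo` fixed and re-extracting the zero level set of the SDF.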