FitMe: Deep Photorealistic 3D Morphable Model Avatars
May 16, 2023
Authors: Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, Baris Gecer, Jiankang Deng, Stefanos Zafeiriou
cs.AI
Abstract
In this paper, we introduce FitMe, a facial reflectance model and a differentiable rendering optimization pipeline that can be used to acquire high-fidelity renderable human avatars from single or multiple images. The model consists of a multi-modal style-based generator that captures facial appearance in terms of diffuse and specular reflectance, and a PCA-based shape model. We employ a fast differentiable rendering process that can be used in an optimization pipeline while also achieving photorealistic facial shading. Our optimization process accurately captures both the facial reflectance and shape in high detail by exploiting the expressivity of the style-based latent representation and of our shape model. FitMe achieves state-of-the-art reflectance acquisition and identity preservation on single "in-the-wild" facial images, while it produces impressive scan-like results when given multiple unconstrained facial images pertaining to the same identity. In contrast to recent implicit avatar reconstructions, FitMe requires only one minute and produces relightable mesh- and texture-based avatars that can be used by end-user applications.
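
To make the abstract's fitting idea concrete, below is a minimal sketch of analysis-by-synthesis optimization through a differentiable shading step: a style latent (appearance) and PCA shape coefficients are jointly optimized against a photometric loss. Every component here (the toy generator, the per-vertex Lambertian shading, the random stand-in data) is a placeholder assumption for illustration, not the authors' model or renderer.

```python
# Minimal sketch: optimize a style latent and PCA shape coefficients through a
# differentiable shading step. All models and data below are toy placeholders.
import torch
import torch.nn as nn

class ToyReflectanceGenerator(nn.Module):
    """Stand-in for the multi-modal style-based generator: maps a latent code
    to per-vertex diffuse (RGB) and specular (scalar) albedo."""
    def __init__(self, latent_dim=64, n_vertices=1024):
        super().__init__()
        self.n_vertices = n_vertices
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_vertices * 4),  # 3 diffuse + 1 specular channels
        )

    def forward(self, w):
        out = self.net(w).view(self.n_vertices, 4)
        diffuse = torch.sigmoid(out[:, :3])
        specular = torch.sigmoid(out[:, 3:])
        return diffuse, specular

def pca_shape(mean, basis, coeffs):
    """PCA shape model: vertices = mean + basis @ coeffs, reshaped to (V, 3)."""
    return (mean + basis @ coeffs).view(-1, 3)

def toy_shade(vertices, diffuse, specular, light_dir):
    """Crude per-vertex Lambertian + specular term, standing in for the fast
    differentiable renderer used in the actual pipeline."""
    normals = nn.functional.normalize(vertices, dim=-1)  # placeholder normals
    lambert = (normals @ light_dir).clamp(min=0.0).unsqueeze(-1)
    return diffuse * lambert + specular * lambert ** 8

# --- synthetic setup: random tensors stand in for real data and bases ---
torch.manual_seed(0)
n_vertices, latent_dim, n_coeffs = 1024, 64, 40
mean_shape = torch.randn(n_vertices * 3)
shape_basis = torch.randn(n_vertices * 3, n_coeffs) * 0.01
light_dir = nn.functional.normalize(torch.tensor([0.0, 0.0, 1.0]), dim=0)
target = torch.rand(n_vertices, 3)  # stands in for observed image colors

generator = ToyReflectanceGenerator(latent_dim, n_vertices)
w = torch.zeros(latent_dim, requires_grad=True)     # style (appearance) latent
alpha = torch.zeros(n_coeffs, requires_grad=True)   # PCA shape coefficients
optimizer = torch.optim.Adam([w, alpha], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    diffuse, specular = generator(w)
    vertices = pca_shape(mean_shape, shape_basis, alpha)
    rendered = toy_shade(vertices, diffuse, specular, light_dir)
    loss = (rendered - target).abs().mean()         # photometric L1 loss
    loss.backward()
    optimizer.step()
```

In the paper itself, the generator is pretrained on facial reflectance data and the rendering step is a full differentiable rasterizer with photorealistic shading; the sketch only mirrors the structure of the optimization loop (latent codes updated by gradients of an image-space loss).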