
Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization

May 4, 2023
作者: Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis
cs.AI

Abstract

There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Methods based on neural implicit representations, such as signed distance functions (SDF) or neural radiance fields, approach photo-realism, but are difficult to animate and do not generalize well to unseen data. To tackle this problem, we propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing. Trained from a collection of high-quality 3D scans, our face model is parameterized by geometry, expression, and texture latent codes with a learned SDF and explicit UV texture parameterization. Once trained, we can reconstruct an avatar from a single in-the-wild image by leveraging the learned prior to project the image into the latent space of our model. Our implicit morphable face models can be used to render an avatar from novel views, animate facial expressions by modifying expression codes, and edit textures by directly painting on the learned UV texture maps. We demonstrate quantitatively and qualitatively that our method improves upon state-of-the-art methods in photo-realism, geometry, and expression accuracy.
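To make the parameterization described in the abstract concrete, below is a minimal PyTorch sketch of a model with that structure: geometry, expression, and texture latent codes conditioning a learned SDF branch and a UV branch, with colors read from a texture field over UV space. This is not the authors' released code; the class and module names (ImplicitMorphableFace, sdf_net, uv_net, tex_net), latent dimensions, and MLP sizes are all illustrative assumptions.

```python
# Hypothetical sketch of an implicit morphable face model as outlined in the
# abstract. All architecture choices (dimensions, depths, names) are assumed.
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Simple ReLU MLP used for each branch of the model."""

    def __init__(self, in_dim, out_dim, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class ImplicitMorphableFace(nn.Module):
    def __init__(self, geo_dim=128, exp_dim=64, tex_dim=128):
        super().__init__()
        # SDF branch: 3D query point + geometry/expression codes -> signed distance.
        self.sdf_net = MLP(3 + geo_dim + exp_dim, 1)
        # UV branch: 3D surface point + codes -> consistent UV coordinates,
        # giving the explicit UV texture parameterization.
        self.uv_net = MLP(3 + geo_dim + exp_dim, 2)
        # Texture branch: UV coordinate + texture code -> RGB. In practice this
        # field can be baked into an explicit UV texture map, which is what
        # makes painting-based texture edits possible.
        self.tex_net = MLP(2 + tex_dim, 3)

    def forward(self, pts, z_geo, z_exp, z_tex):
        # pts: (N, 3) query points; z_*: (1, dim) per-avatar latent codes.
        code = torch.cat([z_geo, z_exp], dim=-1).expand(pts.shape[0], -1)
        sdf = self.sdf_net(torch.cat([pts, code], dim=-1))
        uv = torch.sigmoid(self.uv_net(torch.cat([pts, code], dim=-1)))
        rgb = self.tex_net(torch.cat([uv, z_tex.expand(pts.shape[0], -1)], dim=-1))
        return sdf, uv, rgb
```

Under this sketch, the capabilities the abstract lists map to the latent codes: single-image reconstruction would freeze the network weights and optimize (z_geo, z_exp, z_tex) against a rendering loss on the input photo, animation would modify only z_exp, and texture editing would paint directly into the baked UV texture map.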