

BlendFields: Few-Shot Example-Driven Facial Modeling

May 12, 2023
作者: Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski
cs.AI

Abstract

Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models that cannot represent fine-grained details in texture with a mesh discretization and linear deformation designed to model only a coarse face geometry. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
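The abstract's core idea, blending appearance from a sparse set of extreme example expressions, weighted by how closely the current expression's local volumetric changes match each example's, can be illustrated with a minimal numerical sketch. The function names, array shapes, and the softmax-style weighting below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def blend_weights(current_vol_change, example_vol_changes, temperature=0.1):
    """Hypothetical per-region blend weights from local volume changes.

    current_vol_change:  (R,)   local volume change per region, current expression
    example_vol_changes: (K, R) local volume change per region for K extreme examples
    Returns: (K, R) weights summing to 1 over the K examples for each region.
    """
    # Similarity score: negative squared distance between volume changes
    scores = -((example_vol_changes - current_vol_change[None, :]) ** 2) / temperature
    scores -= scores.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=0, keepdims=True)

def blend_appearance(weights, example_appearance):
    """Blend per-region appearance features from the examples.

    weights:            (K, R)
    example_appearance: (K, R, C) appearance features (e.g. RGB) per region
    Returns: (R, C) blended appearance for the current expression.
    """
    return np.einsum('kr,krc->rc', weights, example_appearance)

# Toy usage: 2 extreme example expressions, 3 face regions, RGB appearance.
examples_vol = np.array([[0.0, 1.0, 0.5],
                         [1.0, 0.0, 0.5]])
current_vol = np.array([0.0, 1.0, 0.5])  # matches example 0 in regions 0 and 1
w = blend_weights(current_vol, examples_vol)
colors = blend_appearance(w, np.random.rand(2, 3, 3))
```

In this toy setup, regions whose volumetric change matches one example closely take almost all of that example's appearance, while regions where the examples are indistinguishable (region 2 here) blend them evenly.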