

BlendFields: Few-Shot Example-Driven Facial Modeling

May 12, 2023
Authors: Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski
cs.AI

Abstract

Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models that cannot represent fine-grained details in texture with a mesh discretization and linear deformation designed to model only a coarse face geometry. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
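The core idea in the abstract — blending appearance from a sparse set of extreme expressions, weighted by how closely the local volumetric change of the current expression matches each captured one — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the per-vertex representation, and the softmax-over-distances weighting are all assumptions made for clarity.

```python
import numpy as np

def blend_weights(current_vol_change, extreme_vol_changes, temperature=0.1):
    """Hypothetical sketch of expression blending weights.

    current_vol_change:  (V,)   per-vertex volumetric change of the current expression
    extreme_vol_changes: (K, V) volumetric changes of K captured extreme expressions
    Returns (K, V) per-vertex weights that sum to 1 over the K expressions.
    """
    # Per-vertex distance between the current expression and each extreme one.
    d = np.abs(extreme_vol_changes - current_vol_change[None, :])  # (K, V)
    # Softmax over expressions: locally similar expressions get higher weight.
    logits = -d / temperature
    logits -= logits.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=0, keepdims=True)

def blend_appearance(weights, extreme_colors):
    """Blend per-vertex RGB appearance captured at the extreme expressions.

    weights: (K, V), extreme_colors: (K, V, 3) -> blended (V, 3) appearance.
    """
    return (weights[:, :, None] * extreme_colors).sum(axis=0)
```

When the current expression locally matches one of the captured extremes, that extreme's appearance dominates at those vertices; in between, the result smoothly interpolates, which is what lets the method add fine-grained detail on top of a coarse volumetric deformation.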