PERSE: Personalized 3D Generative Avatars from A Single Portrait
December 30, 2024
Authors: Hyunsoo Cha, Inhee Lee, Hanbyul Joo
cs.AI
Abstract
We present PERSE, a method for building an animatable personalized generative
avatar from a reference portrait. Our avatar model enables facial attribute
editing in a continuous and disentangled latent space to control each facial
attribute, while preserving the individual's identity. To achieve this, our
method begins by generating a large-scale synthetic 2D video dataset, where
each video contains consistent changes in the facial expression and viewpoint,
combined with a variation in a specific facial attribute from the original
input. We propose a novel pipeline to produce high-quality, photorealistic 2D
videos with facial attribute editing. Leveraging this synthetic attribute
dataset, we present a personalized avatar creation method based on 3D
Gaussian Splatting, learning a continuous and disentangled latent space for
intuitive facial attribute manipulation. To enforce smooth transitions in this
latent space, we introduce a latent space regularization technique by using
interpolated 2D faces as supervision. Compared to previous approaches, we
demonstrate that PERSE generates high-quality avatars with interpolated
attributes while preserving the identity of the reference person.
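The latent-space regularization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names are hypothetical, the renderer here is a toy identity map standing in for the 3D Gaussian Splatting avatar, and the supervision target stands in for an interpolated 2D face image.

```python
import numpy as np

def lerp(z_a, z_b, alpha):
    """Linearly interpolate between two attribute latent codes."""
    return (1.0 - alpha) * z_a + alpha * z_b

def latent_regularization_loss(render, z_a, z_b, target_interp, alpha):
    """L2 penalty between the avatar rendered at an interpolated latent
    code and the interpolated 2D face used as supervision (hypothetical)."""
    pred = render(lerp(z_a, z_b, alpha))
    return float(np.mean((pred - target_interp) ** 2))

# Toy stand-in for the avatar renderer: identity on the latent code.
render = lambda z: z
z_a, z_b = np.zeros(4), np.ones(4)
# Supervision: here simply the interpolated latent itself.
target = lerp(z_a, z_b, 0.5)
loss = latent_regularization_loss(render, z_a, z_b, target, 0.5)
```

In the actual method this loss would be one term in the avatar training objective, encouraging smooth transitions as the attribute latent moves between two learned codes.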