AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
September 5, 2023
Authors: Yue Wu, Sicheng Xu, Jianfeng Xiang, Fangyun Wei, Qifeng Chen, Jiaolong Yang, Xin Tong
cs.AI
Abstract
Previous animatable 3D-aware GANs for human generation have focused primarily on either the human head or the full body. However, head-only videos are relatively uncommon in real life, and full-body generation typically does not address facial expression control and still struggles to produce high-quality results. Toward practical video avatars, we present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements. It is a generative model trained on unstructured 2D image collections without using any 3D or video data. For this new task, we build our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations. A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces, which is critical for portrait images. A pose deformation processing network is developed to generate plausible deformations for challenging regions such as long hair. Experiments show that our method, trained on unstructured 2D images, can generate diverse, high-quality 3D portraits with the desired control over different attributes.
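
To make the high-level architecture described in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of how a latent code and control parameters (facial expression, head pose, shoulder movement) could drive learnable deformation fields over a canonical 3D radiance representation. All module names, layer sizes, and parameter dimensions here are illustrative assumptions, not the authors' implementation: the actual method uses a generative radiance manifold representation, dual-camera adversarial training, and a dedicated pose deformation processing network that are not reproduced in this sketch.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128, depth=3):
    """Small fully connected network used as a stand-in for each sub-module."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class AnimatablePortraitGenerator(nn.Module):
    """Conceptual generator: a canonical 3D radiance representation plus
    learnable deformation fields driven by expression and pose parameters.
    All dimensions below (z_dim, expr_dim, pose_dim) are placeholders."""

    def __init__(self, z_dim=256, expr_dim=64, pose_dim=6):
        super().__init__()
        # Canonical radiance field: maps a 3D point (conditioned on the
        # latent code) to RGB colour and volume density.
        self.canonical_field = mlp(3 + z_dim, 4)
        # Facial deformation: point offsets driven by expression coefficients.
        self.face_deform = mlp(3 + expr_dim, 3)
        # Head-shoulder deformation: offsets driven by pose parameters,
        # refined by a small processing network (e.g. for hair regions).
        self.pose_deform = mlp(3 + pose_dim, 3)
        self.deform_refine = mlp(3, 3)

    def forward(self, points, z, expr, pose):
        # points: (B, N, 3) sample locations along camera rays
        B, N, _ = points.shape
        z_ = z.unsqueeze(1).expand(B, N, -1)
        expr_ = expr.unsqueeze(1).expand(B, N, -1)
        pose_ = pose.unsqueeze(1).expand(B, N, -1)

        # Warp observed points back toward a canonical (neutral) space,
        # so one canonical field can be reused across expressions and poses.
        p = points + self.face_deform(torch.cat([points, expr_], -1))
        offset = self.pose_deform(torch.cat([p, pose_], -1))
        p = p + self.deform_refine(offset)

        # Query the canonical radiance field; volume rendering of the
        # resulting colours/densities (omitted here) would yield the image.
        rgb_sigma = self.canonical_field(torch.cat([p, z_], -1))
        return rgb_sigma[..., :3], rgb_sigma[..., 3]

# Example usage with random inputs.
gen = AnimatablePortraitGenerator()
pts = torch.randn(2, 1024, 3)
rgb, sigma = gen(pts, torch.randn(2, 256), torch.randn(2, 64), torch.randn(2, 6))
```

The sketch only illustrates the general design pattern of warping sample points into a shared canonical space before querying a generative radiance representation; volume rendering, the dual-camera face discriminator, and adversarial training are deliberately omitted.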