AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
September 5, 2023
Authors: Yue Wu, Sicheng Xu, Jianfeng Xiang, Fangyun Wei, Qifeng Chen, Jiaolong Yang, Xin Tong
cs.AI
Abstract
Previous animatable 3D-aware GANs for human generation have primarily focused
on either the human head or the full body. However, head-only videos are
relatively uncommon in real life, and full-body generation typically offers no
facial-expression control and still struggles to produce high-quality results.
Toward practical video avatars, we present an animatable 3D-aware
GAN that generates portrait images with controllable facial expression, head
pose, and shoulder movements. It is a generative model trained on unstructured
2D image collections without using 3D or video data. For the new task, we base
our method on the generative radiance manifold representation and equip it with
learnable facial and head-shoulder deformations. A dual-camera rendering and
adversarial learning scheme is proposed to improve the quality of the generated
faces, which is critical for portrait images. A pose deformation processing
network is developed to generate plausible deformations for challenging regions
such as long hair. Experiments show that our method, trained on unstructured 2D
images, can generate diverse and high-quality 3D portraits with desired control
over different properties.
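The dual-camera rendering and adversarial learning scheme described above can be illustrated with a minimal, hypothetical sketch: one camera frames the whole portrait while a second, face-focused view (approximated here by a crop of the portrait render) is scored by its own discriminator, so the generator is penalized for low face quality as well as low overall quality. All function names, the pseudo-renderer, and the toy discriminators below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_portrait(latent, cam):
    # Stand-in for the volumetric (radiance-manifold) renderer: produces a
    # deterministic pseudo-image so the scheme is runnable (hypothetical).
    h = np.tanh(np.outer(latent, cam).sum())
    img = np.full((64, 64, 3), h, dtype=np.float32)
    return img + 0.01 * rng.standard_normal((64, 64, 3)).astype(np.float32)

def face_view(img, box=(16, 48, 16, 48)):
    # "Second camera": a zoomed-in view of the face region, approximated
    # here by cropping the full portrait render.
    t, b, l, r = box
    return img[t:b, l:r]

def disc_score(img, w):
    # Toy discriminator: mean response under a fixed random projection.
    return float((img * w[: img.shape[0], : img.shape[1], :]).mean())

# Dual adversarial losses: one discriminator on the full portrait,
# one on the face view; the generator minimizes their sum.
z = rng.standard_normal(8)
cam = rng.standard_normal(8)
portrait = render_portrait(z, cam)
face = face_view(portrait)

w_full = rng.standard_normal((64, 64, 3))  # "weights" of the portrait discriminator
w_face = rng.standard_normal((64, 64, 3))  # "weights" of the face discriminator

g_loss = -disc_score(portrait, w_full) - disc_score(face, w_face)
print(portrait.shape, face.shape)
```

In the actual method the face view would be rendered directly from a second, face-centered camera rather than cropped, and both discriminators would be trained networks; the point of the sketch is only the two-branch adversarial objective.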