StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation
May 30, 2023
Authors: Chi Zhang, Yiwen Chen, Yijun Fu, Zhenglin Zhou, Gang YU, Billzb Wang, Bin Fu, Tao Chen, Guosheng Lin, Chunhua Shen
cs.AI
Abstract
The recent advancements in image-text diffusion models have stimulated
research interest in large-scale 3D generative models. Nevertheless, the
limited availability of diverse 3D resources presents significant challenges to
learning. In this paper, we present a novel method for generating high-quality,
stylized 3D avatars that utilizes pre-trained image-text diffusion models for
data generation and a Generative Adversarial Network (GAN)-based 3D generation
network for training. Our method leverages the comprehensive priors of
appearance and geometry offered by image-text diffusion models to generate
multi-view images of avatars in various styles. During data generation, we
employ poses extracted from existing 3D models to guide the generation of
multi-view images. To address the misalignment between poses and images in
data, we investigate view-specific prompts and develop a coarse-to-fine
discriminator for GAN training. We also delve into attribute-related prompts to
increase the diversity of the generated avatars. Additionally, we develop a
latent diffusion model within the style space of StyleGAN to enable the
generation of avatars based on image inputs. Our approach demonstrates superior
performance over current state-of-the-art methods in terms of visual quality
and diversity of the produced avatars.
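
The data-generation step described above (pose-guided, view-specific synthesis with a pre-trained image-text diffusion model) can be illustrated with a minimal sketch. The snippet below is not the authors' code: the ControlNet/Stable Diffusion model IDs, the prompt templates, and the pose-map inputs are assumptions used only to show how poses rendered from existing 3D models and view-specific prompts could steer multi-view image generation. In the paper's pipeline, images produced this way (paired with their poses) serve as training data for the 3D GAN, with the coarse-to-fine discriminator absorbing residual pose-image misalignment.

```python
# Minimal sketch (not the authors' implementation): pose-guided, view-specific
# avatar image generation with a pre-trained image-text diffusion model.
# Model IDs, prompt templates, and the pose inputs are illustrative assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Pose-conditioned ControlNet on top of a pre-trained text-to-image model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# View-specific prompts: naming the viewpoint in the prompt helps align the
# generated image with the conditioning pose, as motivated in the abstract.
VIEW_PROMPTS = {
    "front": "front view of a stylized 3D avatar head, {style}",
    "side": "side view of a stylized 3D avatar head, {style}",
    "back": "back view of a stylized 3D avatar head, {style}",
}

def generate_views(pose_images, style="hand-painted cartoon style"):
    """pose_images: dict mapping view name -> PIL pose map rendered from a 3D model."""
    outputs = {}
    for view, pose in pose_images.items():
        prompt = VIEW_PROMPTS[view].format(style=style)
        # The pose map conditions the diffusion model so the output roughly
        # matches the camera/viewpoint of the source 3D model.
        image = pipe(prompt, image=pose, num_inference_steps=30).images[0]
        outputs[view] = image
    return outputs
```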