StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation
May 30, 2023
Authors: Chi Zhang, Yiwen Chen, Yijun Fu, Zhenglin Zhou, Gang YU, Billzb Wang, Bin Fu, Tao Chen, Guosheng Lin, Chunhua Shen
cs.AI
Abstract
The recent advancements in image-text diffusion models have stimulated
research interest in large-scale 3D generative models. Nevertheless, the
limited availability of diverse 3D resources presents significant challenges to
learning. In this paper, we present a novel method for generating high-quality,
stylized 3D avatars that utilizes pre-trained image-text diffusion models for
data generation and a Generative Adversarial Network (GAN)-based 3D generation
network for training. Our method leverages the comprehensive priors of
appearance and geometry offered by image-text diffusion models to generate
multi-view images of avatars in various styles. During data generation, we
employ poses extracted from existing 3D models to guide the generation of
multi-view images. To address the misalignment between poses and images in
data, we investigate view-specific prompts and develop a coarse-to-fine
discriminator for GAN training. We also delve into attribute-related prompts to
increase the diversity of the generated avatars. Additionally, we develop a
latent diffusion model within the style space of StyleGAN to enable the
generation of avatars based on image inputs. Our approach demonstrates superior
performance over current state-of-the-art methods in terms of visual quality
and diversity of the produced avatars.
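
As a concrete illustration of the pose-guided data generation stage, the sketch below composes attribute-related prompts with view-specific suffixes and conditions a pretrained image-text diffusion model on pose maps extracted from existing 3D models. The abstract does not name specific tools, so the ControlNet/Stable Diffusion checkpoints, the prompt template, and the helper names (`build_prompt`, `generate_multiview`) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch, assuming a pose-conditioned ControlNet pipeline (diffusers)
# as one possible way to realize the pose-guided, view-specific data generation
# described in the abstract; checkpoints and prompt wording are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Pretrained image-text diffusion model with pose conditioning.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

def build_prompt(style: str, attributes: list[str], view: str) -> str:
    """Compose an attribute-related prompt with a view-specific phrase
    (e.g. 'back view') to reduce pose/image misalignment across views."""
    return f"a {style} style 3D avatar, {', '.join(attributes)}, {view} view, high quality"

def generate_multiview(pose_maps, style, attributes):
    """pose_maps: dict mapping a view tag ('front', 'side', 'back') to a pose map
    rendered from an existing 3D model (hypothetical input format)."""
    outputs = {}
    for view, pose_map in pose_maps.items():
        prompt = build_prompt(style, attributes, view)
        outputs[view] = pipe(prompt, image=pose_map, num_inference_steps=30).images[0]
    return outputs
```

In the pipeline the abstract describes, multi-view images produced this way, paired with the poses that guided them, would then serve as training data for the GAN-based 3D generation network.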