
HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles

December 18, 2023
作者: Vanessa Sklyarova, Egor Zakharov, Otmar Hilliges, Michael J. Black, Justus Thies
cs.AI

Abstract

We present HAAR, a new strand-based generative model for 3D human hairstyles. Specifically, based on textual inputs, HAAR produces 3D hairstyles that can be used as production-level assets in modern computer graphics engines. Current AI-based generative models take advantage of powerful 2D priors to reconstruct 3D content in the form of point clouds, meshes, or volumetric functions. However, by using 2D priors, they are intrinsically limited to recovering only the visible parts. Highly occluded hair structures cannot be reconstructed with those methods; they only model the "outer shell", which is not ready to be used in physics-based rendering or simulation pipelines. In contrast, we propose the first text-guided generative method that uses 3D hair strands as the underlying representation. Leveraging 2D visual question-answering (VQA) systems, we automatically annotate synthetic hair models that are generated from a small set of artist-created hairstyles. This allows us to train a latent diffusion model that operates in a common hairstyle UV space. In qualitative and quantitative studies, we demonstrate the capabilities of the proposed model and compare it to existing hairstyle generation approaches.
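
To make the "latent diffusion in a hairstyle UV space" idea concrete, the sketch below shows a minimal text-conditioned diffusion sampling loop over a UV-space latent map. This is an illustrative assumption, not the authors' implementation: the latent resolution, the TinyDenoiser network, the text-embedding dimension, and the noise schedule are all placeholders; the paper's actual architecture and strand decoder are not reproduced here.

```python
# Illustrative sketch (not the HAAR code): text-conditioned latent diffusion
# over a hairstyle UV-space latent map. All shapes and modules are hypothetical.
import torch
import torch.nn as nn

LATENT_C, UV_H, UV_W = 64, 32, 32   # hypothetical UV latent-map resolution
TEXT_DIM = 512                       # hypothetical text-embedding size


class TinyDenoiser(nn.Module):
    """Stand-in for the UV-space denoising network (e.g. a U-Net)."""
    def __init__(self):
        super().__init__()
        self.text_proj = nn.Linear(TEXT_DIM, LATENT_C)
        self.conv = nn.Sequential(
            nn.Conv2d(LATENT_C, 128, 3, padding=1), nn.SiLU(),
            nn.Conv2d(128, LATENT_C, 3, padding=1),
        )

    def forward(self, z_t, t, text_emb):
        # Inject the text condition as a per-channel bias (simplified).
        cond = self.text_proj(text_emb)[:, :, None, None]
        return self.conv(z_t + cond)   # predicts the noise epsilon


@torch.no_grad()
def sample_hairstyle_latent(denoiser, text_emb, steps=50):
    """DDPM-style ancestral sampling of a UV latent from a text embedding."""
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(1, LATENT_C, UV_H, UV_W)        # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(z, t, text_emb)
        a_t, ab_t = alphas[t], alpha_bars[t]
        # Posterior mean of the reverse step (epsilon parameterization).
        z = (z - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return z   # a strand decoder would map this latent to 3D hair strands


if __name__ == "__main__":
    denoiser = TinyDenoiser()
    text_emb = torch.randn(1, TEXT_DIM)             # e.g. from a frozen text encoder
    latent = sample_hairstyle_latent(denoiser, text_emb)
    print(latent.shape)                             # torch.Size([1, 64, 32, 32])
```

In the paper's pipeline the sampled UV-space latent is decoded into per-strand 3D geometry, which is what makes the output usable in physics-based rendering and simulation; the stub above stops at the latent for brevity.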