
DreamHuman: Animatable 3D Avatars from Text

June 15, 2023
作者: Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Fieraru, Cristian Sminchisescu
cs.AI

Abstract

We present DreamHuman, a method to generate realistic, animatable 3D human avatar models solely from textual descriptions. Recent text-to-3D methods have made considerable strides in generation, but still fall short in important aspects: control and spatial resolution are often limited, existing methods produce fixed rather than animatable 3D human models, and anthropometric consistency for complex structures like people remains a challenge. DreamHuman connects large text-to-image synthesis models, neural radiance fields, and statistical human body models in a novel modeling and optimization framework. This makes it possible to generate dynamic 3D human avatars with high-quality textures and learned, instance-specific surface deformations. We demonstrate that our method is capable of generating a wide variety of animatable, realistic 3D human models from text. Our 3D models exhibit diverse appearances, clothing, skin tones, and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity. For more results and animations, please visit our website at https://dream-human.github.io.
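
To make the framework described in the abstract more concrete, below is a minimal, heavily simplified sketch of the kind of optimization loop such a method implies: a NeRF conditioned on a statistical body model is rendered under randomly sampled poses, and the rendering is supervised by a text-conditioned image-level loss. All names here (`BodyModel`, `PoseConditionedNeRF`, `render`, `sds_loss`) are hypothetical placeholders introduced for illustration, and the score-distillation signal from a frozen text-to-image diffusion model is replaced by a toy stand-in; this is not the paper's implementation.

```python
# Illustrative sketch only: the body model, NeRF, renderer, and loss below are
# toy stand-ins, not the components used in DreamHuman.
import torch
import torch.nn as nn

class BodyModel(nn.Module):
    """Stand-in for a statistical human body model: maps shape and pose
    parameters to a conditioning code for the radiance field."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(10 + 72, dim)  # 10 shape + 72 pose params (assumed sizes)

    def forward(self, shape, pose):
        return self.embed(torch.cat([shape, pose], dim=-1))

class PoseConditionedNeRF(nn.Module):
    """Stand-in for a NeRF whose color/density depend on the body code,
    so the optimized avatar can later be re-posed (animated)."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + dim, 128), nn.ReLU(),
                                 nn.Linear(128, 4))  # RGB + density

    def forward(self, points, body_code):
        code = body_code.expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, code], dim=-1))

def render(nerf, body_code, camera):
    """Toy 'renderer': queries random points; a real renderer would
    ray-march through the volume from the sampled camera."""
    points = torch.rand(1024, 3) * 2 - 1
    rgb_sigma = nerf(points, body_code)
    return torch.sigmoid(rgb_sigma[:, :3]).mean(0, keepdim=True)

def sds_loss(image, prompt):
    """Placeholder for score distillation: in practice the gradient would
    come from a frozen text-to-image diffusion model given `prompt`."""
    target = torch.full_like(image, 0.5)
    return ((image - target) ** 2).mean()

nerf, body = PoseConditionedNeRF(), BodyModel()
opt = torch.optim.Adam(list(nerf.parameters()) + list(body.parameters()), lr=1e-3)
prompt = "a woman wearing a red dress"

for step in range(100):
    shape = torch.zeros(1, 10)
    pose = torch.randn(1, 72) * 0.1      # random re-posing during optimization
    camera = None                        # a real loop would also sample viewpoints
    image = render(nerf, body(shape, pose), camera)
    loss = sds_loss(image, prompt)
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design idea this sketch tries to convey is that the radiance field is optimized while conditioned on a body model, so that after optimization the same avatar can be driven by new pose parameters rather than being a single fixed 3D shape.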