

DreamHuman: Animatable 3D Avatars from Text

June 15, 2023
Authors: Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Fieraru, Cristian Sminchisescu
cs.AI

Abstract

We present DreamHuman, a method to generate realistic animatable 3D human avatar models solely from textual descriptions. Recent text-to-3D methods have made considerable strides in generation, but are still lacking in important aspects. Control and often spatial resolution remain limited, existing methods produce fixed rather than animated 3D human models, and anthropometric consistency for complex structures like people remains a challenge. DreamHuman connects large text-to-image synthesis models, neural radiance fields, and statistical human body models in a novel modeling and optimization framework. This makes it possible to generate dynamic 3D human avatars with high-quality textures and learned, instance-specific surface deformations. We demonstrate that our method is capable of generating a wide variety of animatable, realistic 3D human models from text. Our 3D models have diverse appearance, clothing, skin tones, and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity. For more results and animations please check our website at https://dream-human.github.io.
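The abstract describes an optimization framework in which a frozen text-to-image model guides the parameters of a 3D scene representation. The sketch below is only a toy illustration of that general idea (a score-distillation-style loop), not the DreamHuman implementation: the `render` and `prior_score` functions are placeholder stand-ins for a differentiable renderer and a frozen image-prior gradient.

```python
import numpy as np

# Toy stand-in for guidance-driven 3D optimization: scene parameters are
# updated so that their render follows the gradient supplied by a frozen
# prior. Both functions below are illustrative placeholders.

rng = np.random.default_rng(0)

theta = rng.normal(size=3)            # toy scene parameters (stand-in for NeRF weights)
target = np.array([0.8, 0.2, 0.5])    # what the frozen prior "prefers"

def render(theta):
    # Placeholder differentiable renderer: identity map for illustration.
    return theta

def prior_score(image):
    # Placeholder for a frozen guidance model's score: a direction
    # pointing from the current render toward the prior's preference.
    return target - image

lr = 0.1
losses = []
for _ in range(200):
    img = render(theta)
    grad = -prior_score(img)          # gradient on the render from the prior
    theta = theta - lr * grad         # gradient step on scene parameters
    losses.append(float(np.sum((img - target) ** 2)))
```

In the actual method, the renderer is a neural radiance field constrained by a statistical human body model, and the guidance comes from a large text-to-image diffusion model; the loop structure above only conveys the optimization pattern.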