

ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians

June 24, 2024
作者: Yufei Liu, Junshu Tang, Chu Zheng, Shijie Zhang, Jinkun Hao, Junwei Zhu, Dongjin Huang
cs.AI

Abstract

High-fidelity 3D garment synthesis from text is desirable yet challenging for digital avatar creation. Recent diffusion-based approaches via Score Distillation Sampling (SDS) have enabled new possibilities, but they either couple intricately with the human body or struggle with reuse. We introduce ClotheDreamer, a 3D Gaussian-based method for generating wearable, production-ready 3D garment assets from text prompts. We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization. DCGS represents the clothed avatar as one Gaussian model but freezes the body Gaussian splats. To enhance quality and completeness, we incorporate bidirectional SDS to separately supervise RGBD renderings of the clothed avatar and of the garment under pose conditions, and we propose a new pruning strategy for loose clothing. Our approach can also take custom clothing templates as input. Benefiting from our design, the synthesized 3D garments can be easily applied to virtual try-on and support physically accurate animation. Extensive experiments showcase our method's superior and competitive performance. Our project page is at https://ggxxii.github.io/clothedreamer.
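
The abstract describes disentangling body and garment Gaussians so that only the garment splats receive gradients while bidirectional SDS supervises renderings of the clothed avatar and of the garment alone. The sketch below illustrates that optimization structure in broad strokes; the renderer, the guidance function, and all tensor shapes are hypothetical stand-ins (a real pipeline would use a Gaussian splatting rasterizer and a pretrained diffusion prior), not the authors' implementation.

```python
import torch

def make_splats(n, requires_grad):
    """Toy 3D Gaussian parameters: positions, scales, colors, opacities."""
    return {
        "xyz":     torch.randn(n, 3, requires_grad=requires_grad),
        "scale":   torch.rand(n, 3, requires_grad=requires_grad),
        "color":   torch.rand(n, 3, requires_grad=requires_grad),
        "opacity": torch.rand(n, 1, requires_grad=requires_grad),
    }

body_splats    = make_splats(5000, requires_grad=False)  # frozen body Gaussians
garment_splats = make_splats(3000, requires_grad=True)   # garment Gaussians being optimized

def render_rgbd(splat_sets):
    """Hypothetical stand-in for a Gaussian splatting RGBD renderer."""
    xyz   = torch.cat([s["xyz"] for s in splat_sets], dim=0)
    color = torch.cat([s["color"] for s in splat_sets], dim=0)
    rgb   = color.mean(dim=0)                   # stand-in "image"
    depth = xyz.norm(dim=-1).mean().reshape(1)  # stand-in "depth"
    return torch.cat([rgb, depth])

def sds_like_guidance(rendering):
    """Stand-in for SDS guidance; a real step would use a diffusion model's noise residual."""
    return torch.randn_like(rendering)

optimizer = torch.optim.Adam(list(garment_splats.values()), lr=1e-2)

for step in range(10):
    optimizer.zero_grad()
    # Bidirectional supervision: render both the full clothed avatar and the garment alone.
    avatar_view  = render_rgbd([body_splats, garment_splats])
    garment_view = render_rgbd([garment_splats])
    # SDS-style update: inject guidance "gradients" rather than backpropagating a scalar loss.
    avatar_view.backward(sds_like_guidance(avatar_view))
    garment_view.backward(sds_like_guidance(garment_view))
    optimizer.step()  # only the garment splats change; the body splats stay frozen
```

Because the body splats are created with `requires_grad=False`, the joint avatar rendering can constrain how the garment fits the body without ever modifying the body itself, which is what makes the resulting garment reusable across avatars.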
