
ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians

June 24, 2024
Authors: Yufei Liu, Junshu Tang, Chu Zheng, Shijie Zhang, Jinkun Hao, Junwei Zhu, Dongjin Huang
cs.AI

Abstract

High-fidelity 3D garment synthesis from text is desirable yet challenging for digital avatar creation. Recent diffusion-based approaches via Score Distillation Sampling (SDS) have opened new possibilities, but the resulting garments are either intricately coupled with the human body or difficult to reuse. We introduce ClotheDreamer, a 3D Gaussian-based method for generating wearable, production-ready 3D garment assets from text prompts. We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization. DCGS represents the clothed avatar as a single Gaussian model but freezes the body Gaussian splats. To enhance quality and completeness, we incorporate bidirectional SDS to separately supervise the clothed-avatar and garment RGBD renderings under pose conditions, and we propose a new pruning strategy for loose clothing. Our approach also supports custom clothing templates as input. Thanks to our design, the synthesized 3D garments can be easily applied to virtual try-on and support physically accurate animation. Extensive experiments showcase our method's superior and competitive performance. Our project page is at https://ggxxii.github.io/clothedreamer.
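For readers unfamiliar with SDS, the supervision signal this family of methods optimizes follows the standard Score Distillation Sampling gradient introduced in DreamFusion; the sketch below uses generic notation for context and is not taken from the paper itself:

$$
\nabla_{\theta}\,\mathcal{L}_{\mathrm{SDS}} \;=\; \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_{\phi}(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \,\right]
$$

Here $x$ is a differentiable rendering of the Gaussian parameters $\theta$ (in ClotheDreamer's setting, the garment splats), $x_t$ is that rendering noised to diffusion timestep $t$, $y$ is the text prompt, $\hat{\epsilon}_{\phi}$ is the diffusion model's noise prediction, and $w(t)$ is a timestep weighting. Per the abstract, this kind of supervision is applied bidirectionally, once on the clothed-avatar rendering and once on the garment-only RGBD rendering, so the frozen body splats anchor the garment geometry while the garment Gaussians remain reusable on their own.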
