DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation

September 28, 2023
Authors: Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, Gang Zeng
cs.AI

Abstract

Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Though promising results have been exhibited, these methods often suffer from slow per-sample optimization, limiting their practical usage. In this paper, we propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously. Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space. In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks. To further enhance the texture quality and facilitate downstream applications, we introduce an efficient algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning stage to refine the details. Extensive experiments demonstrate the superior efficiency and competitive generation quality of our proposed approach. Notably, DreamGaussian produces high-quality textured meshes in just 2 minutes from a single-view image, achieving approximately 10 times acceleration compared to existing methods.
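The abstract contrasts the progressive densification of 3D Gaussians with the occupancy pruning used in NeRF-based generation. As an illustration of that idea, below is a minimal, self-contained sketch (not the authors' implementation) of the gradient-driven densify-and-prune step that 3D Gaussian Splatting pipelines commonly use and that DreamGaussian's progressive densification builds on. The `densify_and_prune` helper, the thresholds, and the split factor are hypothetical placeholder values chosen only for illustration.

```python
# Minimal, illustrative sketch (not the authors' code) of gradient-driven
# densify-and-prune for 3D Gaussians. All thresholds below are placeholders.
import torch

def densify_and_prune(xyz, scales, opacities, xyz_grad_accum,
                      grad_thresh=0.02, min_opacity=0.01, split_factor=1.6):
    """Spawn extra Gaussians where accumulated positional gradients are large,
    then prune Gaussians that have become nearly transparent."""
    # 1) Pick Gaussians with large accumulated positional gradients,
    #    i.e. regions the optimization wants to change but cannot yet resolve.
    needs_densify = xyz_grad_accum.norm(dim=-1) > grad_thresh

    # 2) Densify: add a slightly perturbed, downscaled copy of each selected
    #    Gaussian so the region is covered by finer primitives.
    noise = torch.randn_like(xyz[needs_densify]) * scales[needs_densify]
    new_xyz = xyz[needs_densify] + 0.1 * noise
    new_scales = scales[needs_densify] / split_factor
    new_opacities = opacities[needs_densify].clone()

    xyz = torch.cat([xyz, new_xyz], dim=0)
    scales = torch.cat([scales, new_scales], dim=0)
    opacities = torch.cat([opacities, new_opacities], dim=0)

    # 3) Prune: drop Gaussians whose opacity has collapsed toward zero.
    keep = opacities.squeeze(-1) > min_opacity
    return xyz[keep], scales[keep], opacities[keep]

# Toy usage: random parameters stand in for Gaussians mid-optimization; the
# accumulated gradients would normally come from autograd during SDS training.
N = 1000
xyz = torch.randn(N, 3)
scales = torch.rand(N, 3) * 0.05
opacities = torch.rand(N, 1)
grad_accum = torch.rand(N, 3) * 0.05

xyz, scales, opacities = densify_and_prune(xyz, scales, opacities, grad_accum)
print(f"Gaussians after densify/prune: {xyz.shape[0]}")
```

In the full method described in the abstract, this kind of step is interleaved with SDS-guided optimization of the Gaussians, followed by mesh extraction and a UV-space texture refinement stage.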
