DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation

September 28, 2023
Authors: Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, Gang Zeng
cs.AI

Abstract

Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Though promising results have been exhibited, these methods often suffer from slow per-sample optimization, limiting their practical usage. In this paper, we propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously. Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space. In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks. To further enhance the texture quality and facilitate downstream applications, we introduce an efficient algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning stage to refine the details. Extensive experiments demonstrate the superior efficiency and competitive generation quality of our proposed approach. Notably, DreamGaussian produces high-quality textured meshes in just 2 minutes from a single-view image, achieving approximately 10 times acceleration compared to existing methods.
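To make the pipeline described in the abstract more concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the first stage: optimizing per-Gaussian parameters under 2D diffusion guidance via SDS, with periodic densification. The functions `render_views` and `sds_grad` are stand-in stubs introduced only for illustration; a real pipeline would use a differentiable Gaussian splatting rasterizer and a frozen 2D diffusion model.

```python
# Minimal sketch (assumptions, not the authors' code) of SDS-guided
# optimization of 3D Gaussians with periodic densification.
import torch

N = 5000  # initial number of Gaussians (arbitrary for this sketch)

# Learnable per-Gaussian attributes: position, log-scale, rotation quaternion,
# color, and opacity logit.
params = {
    "xyz":     torch.randn(N, 3, requires_grad=True),
    "scale":   torch.zeros(N, 3, requires_grad=True),
    "rot":     torch.randn(N, 4, requires_grad=True),
    "rgb":     torch.rand(N, 3, requires_grad=True),
    "opacity": torch.zeros(N, 1, requires_grad=True),
}
opt = torch.optim.Adam(params.values(), lr=1e-2)

def render_views(p, camera):
    """Stub for a differentiable Gaussian splatting renderer."""
    # A real renderer would project and alpha-composite the Gaussians from
    # `camera`; here we only return a differentiable dummy image.
    return p["rgb"].mean().expand(1, 3, 64, 64)

def sds_grad(image, condition):
    """Stub for score distillation sampling (SDS).

    A real implementation noises the rendering, queries a frozen diffusion
    model, and returns w(t) * (predicted_noise - noise) as the image gradient.
    """
    return torch.zeros_like(image)

for step in range(500):
    camera = None  # a real pipeline samples a random viewpoint each step
    opt.zero_grad()
    image = render_views(params, camera)
    # SDS injects its gradient directly at the rendered image.
    image.backward(gradient=sds_grad(image, condition="input view or prompt"))
    opt.step()

    # Progressive densification: periodically clone or split Gaussians with
    # large positional gradients, rather than pruning an occupancy grid as in
    # NeRF-based pipelines. The growth logic itself is omitted in this sketch.
    if step > 0 and step % 100 == 0:
        with torch.no_grad():
            pass  # e.g. duplicate Gaussians whose accumulated grad norm is high
```

The second stage described in the abstract, extracting a textured mesh from the optimized Gaussians and fine-tuning its texture in UV space, would follow this loop and is not shown here.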
