GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors
October 12, 2023
Authors: Taoran Yi, Jiemin Fang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, Xinggang Wang
cs.AI
Abstract
In recent times, the generation of 3D assets from text prompts has shown
impressive results. Both 2D and 3D diffusion models can generate decent 3D
objects based on prompts. 3D diffusion models have good 3D consistency, but
their quality and generalization are limited as trainable 3D data is expensive
and hard to obtain. 2D diffusion models enjoy strong abilities of
generalization and fine generation, but the 3D consistency is hard to
guarantee. This paper attempts to bridge the power of the two types of
diffusion models via the recent explicit and efficient 3D Gaussian splatting
representation. A fast 3D generation framework, named GaussianDreamer, is proposed,
where the 3D diffusion model provides point cloud priors for initialization and
the 2D diffusion model enriches the geometry and appearance. Operations of
noisy point growing and color perturbation are introduced to enhance the
initialized Gaussians. Our GaussianDreamer can generate a high-quality 3D instance within
25 minutes on one GPU, much faster than previous methods, while the generated
instances can be directly rendered in real time. Demos and code are available
at https://taoranyi.com/gaussiandreamer/.
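
To make the described pipeline concrete, below is a minimal NumPy sketch of the initialization stage: a point cloud produced by a 3D diffusion prior is densified by noisy point growing, its colors are lightly perturbed, and the resulting points are converted into initial Gaussian parameters to be refined under 2D diffusion guidance. This is an illustrative assumption-laden sketch, not the authors' implementation; the function names and hyperparameter values (growth_factor, noise_std, color_jitter, the isotropic scale and opacity defaults) are hypothetical.

```python
# Minimal sketch (not the authors' code) of initializing 3D Gaussians from a
# point-cloud prior with noisy point growing and color perturbation.
import numpy as np


def grow_and_perturb(points, colors, growth_factor=4, noise_std=0.02,
                     color_jitter=0.05, seed=0):
    """Densify a sparse point cloud with jittered copies of each point and
    lightly perturb the copied colors.

    points : (N, 3) xyz coordinates from a 3D diffusion prior
    colors : (N, 3) RGB values in [0, 1]
    """
    rng = np.random.default_rng(seed)
    # Noisy point growing: replicate every point `growth_factor` times and
    # offset each copy with small Gaussian noise around the original surface.
    grown_xyz = np.repeat(points, growth_factor, axis=0)
    grown_xyz = grown_xyz + rng.normal(scale=noise_std, size=grown_xyz.shape)
    # Color perturbation: copy the source color and jitter it slightly so the
    # new Gaussians do not start with identical appearance.
    grown_rgb = np.repeat(colors, growth_factor, axis=0)
    grown_rgb = np.clip(
        grown_rgb + rng.normal(scale=color_jitter, size=grown_rgb.shape), 0.0, 1.0)
    # Keep the original points alongside the grown ones.
    xyz = np.concatenate([points, grown_xyz], axis=0)
    rgb = np.concatenate([colors, grown_rgb], axis=0)
    return xyz, rgb


def init_gaussians(xyz, rgb, init_scale=0.01, init_opacity=0.1):
    """Turn the densified point cloud into per-Gaussian parameters
    (position, color, isotropic scale, opacity) for later optimization."""
    n = xyz.shape[0]
    return {
        "xyz": xyz.astype(np.float32),
        "rgb": rgb.astype(np.float32),
        "scale": np.full((n, 3), init_scale, dtype=np.float32),
        "opacity": np.full((n, 1), init_opacity, dtype=np.float32),
    }


if __name__ == "__main__":
    # Stand-in for a point cloud produced by a text-to-3D diffusion model.
    pts = np.random.rand(1024, 3) - 0.5
    cols = np.random.rand(1024, 3)
    xyz, rgb = grow_and_perturb(pts, cols)
    gaussians = init_gaussians(xyz, rgb)
    print(gaussians["xyz"].shape, gaussians["rgb"].shape)
```

In the full framework these initialized Gaussians would then be optimized with a 2D diffusion model (e.g., via score distillation) to enrich geometry and appearance, which is what allows generation to finish within about 25 minutes on a single GPU.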