GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors
October 12, 2023
Authors: Taoran Yi, Jiemin Fang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, Xinggang Wang
cs.AI
Abstract
In recent times, the generation of 3D assets from text prompts has shown
impressive results. Both 2D and 3D diffusion models can generate decent 3D
objects based on prompts. 3D diffusion models have good 3D consistency, but
their quality and generalization are limited as trainable 3D data is expensive
and hard to obtain. 2D diffusion models offer strong generalization and
fine-grained generation ability, but their 3D consistency is hard to
guarantee. This paper attempts to bridge the strengths of the two types of
diffusion models via the recent explicit and efficient 3D Gaussian splatting
representation. A fast 3D generation framework, named GaussianDreamer, is proposed,
where the 3D diffusion model provides point cloud priors for initialization and
the 2D diffusion model enriches the geometry and appearance. Operations of
noisy point growing and color perturbation are introduced to enhance the
initialized Gaussians. Our GaussianDreamer can generate a high-quality 3D instance within
25 minutes on one GPU, much faster than previous methods, and the generated
instances can be directly rendered in real time. Demos and code are available
at https://taoranyi.com/gaussiandreamer/.
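The noisy point growing and color perturbation operations mentioned above can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea; the function name, parameters, and noise scales are assumptions for illustration and are not taken from the paper:

```python
import numpy as np

def grow_and_perturb(points, colors, n_grow=1000, noise_std=0.02,
                     color_jitter=0.1, seed=0):
    """Grow noisy points around a coarse point cloud and perturb colors.

    points: (N, 3) array of 3D positions from the 3D diffusion prior.
    colors: (N, 3) array of RGB values in [0, 1].
    Returns a denser (N + n_grow, 3) cloud with jittered colors.
    NOTE: a rough sketch of the idea, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    # Sample parent points at random and offset them with Gaussian noise.
    idx = rng.integers(0, len(points), size=n_grow)
    grown_pts = points[idx] + rng.normal(0.0, noise_std, size=(n_grow, 3))
    # Copy the parent colors and add a small uniform perturbation,
    # keeping the result in the valid [0, 1] range.
    grown_cols = np.clip(
        colors[idx] + rng.uniform(-color_jitter, color_jitter, size=(n_grow, 3)),
        0.0, 1.0)
    return (np.concatenate([points, grown_pts], axis=0),
            np.concatenate([colors, grown_cols], axis=0))
```

The densified, perturbed point cloud would then initialize the positions and colors of the 3D Gaussians before the 2D-diffusion-guided optimization stage.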