RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion
April 10, 2024
Authors: Jaidev Shriram, Alex Trevithick, Lingjie Liu, Ravi Ramamoorthi
cs.AI
Abstract
We introduce RealmDreamer, a technique for generation of general
forward-facing 3D scenes from text descriptions. Our technique optimizes a 3D
Gaussian Splatting representation to match complex text prompts. We initialize
these splats by utilizing state-of-the-art text-to-image generators,
lifting their samples into 3D, and computing the occlusion volume. We then
optimize this representation across multiple views as a 3D inpainting task with
image-conditional diffusion models. To learn correct geometric structure, we
incorporate a depth diffusion model by conditioning on the samples from the
inpainting model, giving rich geometric structure. Finally, we finetune the
model using sharpened samples from image generators. Notably, our technique
does not require video or multi-view data and can synthesize a variety of
high-quality 3D scenes in different styles, consisting of multiple objects. Its
generality additionally allows 3D synthesis from a single image.
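As a rough illustration of the initialization step the abstract describes (lifting a generated image into 3D via a depth map and computing the occlusion volume that inpainting must later fill), here is a minimal sketch. This is not the authors' implementation: the pinhole-camera unprojection, the voxel sampling, and all names and parameters below are illustrative assumptions.

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a depth map into a 3D point cloud in camera coordinates
    (one candidate splat center per pixel)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (H, W, 3)

def occlusion_volume(depth, z_samples):
    """Mark, along each pixel ray, the depth samples that lie behind the
    observed surface -- the unseen region a 3D inpainting model must fill."""
    # z_samples: (D,) candidate depths; occluded where z > observed depth
    return z_samples[None, None, :] > depth[..., None]  # (H, W, D) bool

# Toy example: a 2x2 depth map and 4 candidate depths per ray
depth = np.array([[1.0, 2.0], [3.0, 4.0]])
pts = unproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
occ = occlusion_volume(depth, np.array([0.5, 1.5, 2.5, 3.5]))
```

In the actual method these points would initialize a 3D Gaussian Splatting representation, and the occluded voxels delimit where the image-conditional diffusion model supervises novel views.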