Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation
January 25, 2024
Authors: Minglin Chen, Longguang Wang, Weihao Yuan, Yukun Wang, Zhe Sheng, Yisheng He, Zilong Dong, Liefeng Bo, Yulan Guo
cs.AI
Abstract
Recently, text-to-3D approaches have achieved high-fidelity 3D content
generation from text descriptions. However, the generated objects are
stochastic and lack fine-grained control. Sketches provide a cheap way to
introduce such fine-grained control. Nevertheless, it is challenging to achieve
flexible control from these sketches due to their abstraction and ambiguity. In
this paper, we present a multi-view sketch-guided text-to-3D generation
framework (namely, Sketch2NeRF) to add sketch control to 3D generation.
Specifically, our method leverages pretrained 2D diffusion models (e.g., Stable
Diffusion and ControlNet) to supervise the optimization of a 3D scene
represented by a neural radiance field (NeRF). We propose a novel synchronized
generation and reconstruction method to effectively optimize the NeRF. In the
experiments, we collected two kinds of multi-view sketch datasets to evaluate
the proposed method. We demonstrate that our method can synthesize
3D-consistent content with fine-grained sketch control while remaining faithful
to the text prompts. Extensive results show that our method achieves
state-of-the-art performance in terms of sketch similarity and text alignment.
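The abstract only names the ingredients: a sketch-conditioned ControlNet on top of Stable Diffusion, a NeRF scene representation, and a synchronized generation-and-reconstruction loop. The minimal Python sketch below shows one way such a loop could be wired up with the Hugging Face diffusers API. It is not the authors' implementation: the radiance field, renderer, camera poses, and sketch data are placeholder stubs, and the checkpoint names, prompt, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch of a sketch-conditioned generation-and-reconstruction loop.
# Only the ControlNet / Stable Diffusion calls use the real diffusers API; the
# radiance field, renderer, poses, and sketches are stand-in stubs.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Sketch (scribble) ControlNet plugged into Stable Diffusion (assumed checkpoints).
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
).to(device)


class TinyField(torch.nn.Module):
    """Placeholder radiance field; a real system would use a full NeRF backbone."""

    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
        )

    def forward(self, points):
        return torch.sigmoid(self.mlp(points))


def render_view(field, pose, hw=(64, 64)):
    """Stub renderer: queries the field on a flat grid instead of ray marching."""
    xs = torch.linspace(-1.0, 1.0, hw[0], device=device)
    ys = torch.linspace(-1.0, 1.0, hw[1], device=device)
    grid = torch.stack(torch.meshgrid(xs, ys, indexing="ij"), dim=-1)
    points = torch.cat([grid, torch.zeros(*hw, 1, device=device)], dim=-1)
    return field(points).permute(2, 0, 1)  # (3, H, W)


def dummy_sketch(size=(512, 512)):
    """Stand-in for one multi-view sketch: sparse white strokes on black."""
    strokes = (np.random.rand(*size) > 0.995).astype(np.uint8) * 255
    return Image.fromarray(strokes).convert("RGB")


field = TinyField().to(device)
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)
prompt = "a wooden chair"                      # assumed text prompt
sketches = [dummy_sketch() for _ in range(8)]  # one sketch per (fake) camera view
poses = [None] * len(sketches)                 # poses unused by the stub renderer

for step in range(200):
    i = step % len(sketches)

    # Generation: ControlNet produces an image that follows the sketch at this view.
    generated = pipe(prompt, image=sketches[i], num_inference_steps=20).images[0]
    target = torch.from_numpy(np.array(generated)).float().permute(2, 0, 1) / 255.0
    target = target.unsqueeze(0).to(device)

    # Reconstruction: fit the field to the generated view with a photometric loss.
    rendered = render_view(field, poses[i])
    target = torch.nn.functional.interpolate(target, size=rendered.shape[1:])[0]
    loss = torch.nn.functional.mse_loss(rendered, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This stub only captures the high-level alternation the abstract describes, i.e. generating sketch-faithful views with ControlNet and fitting the 3D representation to them; the paper's synchronized generation and reconstruction procedure and its NeRF optimization details are not reproduced here.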