Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation

January 25, 2024
Authors: Minglin Chen, Longguang Wang, Weihao Yuan, Yukun Wang, Zhe Sheng, Yisheng He, Zilong Dong, Liefeng Bo, Yulan Guo
cs.AI

Abstract

Recently, text-to-3D approaches have achieved high-fidelity 3D content generation from text descriptions. However, the generated objects are stochastic and lack fine-grained control. Sketches provide an inexpensive way to introduce such fine-grained control, yet achieving flexible control from sketches is challenging due to their abstraction and ambiguity. In this paper, we present a multi-view sketch-guided text-to-3D generation framework (namely, Sketch2NeRF) that adds sketch control to 3D generation. Specifically, our method leverages pretrained 2D diffusion models (e.g., Stable Diffusion and ControlNet) to supervise the optimization of a 3D scene represented by a neural radiance field (NeRF). We propose a novel synchronized generation and reconstruction method to effectively optimize the NeRF. In the experiments, we collected two kinds of multi-view sketch datasets to evaluate the proposed method. We demonstrate that our method can synthesize 3D-consistent content with fine-grained sketch control while remaining faithful to the text prompts. Extensive results show that our method achieves state-of-the-art performance in terms of sketch similarity and text alignment.
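
To make the optimization loop described in the abstract concrete, below is a minimal conceptual sketch in Python. It is not the authors' implementation: it assumes the Hugging Face diffusers library for the frozen Stable Diffusion + ControlNet prior, and the toy radiance field, random camera rays, blank placeholder sketches, prompt, and hyperparameters are all hypothetical stand-ins. Each iteration renders the current field at a sampled view, refines that render with the sketch-conditioned diffusion model (generation), and then fits the field to the refined image with a photometric loss (reconstruction), which is one plausible reading of the paper's synchronized generation and reconstruction scheme.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

device, H, W = "cuda", 512, 512

# Placeholder radiance field: a tiny MLP from ray (origin, direction) to RGB.
# A real system would use a volumetric NeRF renderer; this is only a stand-in.
class ToyField(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 3), nn.Sigmoid())

    def render(self, rays):                      # rays: (H*W, 6) -> RGB (H*W, 3)
        return self.mlp(rays)

field = ToyField().to(device)
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

# Frozen 2D priors: a scribble ControlNet on top of Stable Diffusion.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to(device)

prompt = "a photo of a wooden chair"             # assumed text prompt
# Hypothetical per-view inputs: camera ray bundles and their hand-drawn sketches.
num_views = 8
view_rays = [torch.randn(H * W, 6, device=device) for _ in range(num_views)]
view_sketches = [Image.new("RGB", (W, H)) for _ in range(num_views)]  # stand-ins

for step in range(1000):
    v = np.random.randint(num_views)

    # 1) Render the current field at the sampled view (no grad: img2img init only).
    with torch.no_grad():
        render = field.render(view_rays[v]).reshape(H, W, 3)
    render_pil = Image.fromarray((render.cpu().numpy() * 255).astype("uint8"))

    # 2) Generation: refine the render with the sketch-conditioned diffusion model.
    target_pil = pipe(prompt=prompt, image=render_pil,
                      control_image=view_sketches[v],
                      strength=0.5, num_inference_steps=20).images[0]
    target = torch.from_numpy(np.array(target_pil)).float().to(device) / 255.0

    # 3) Reconstruction: fit the field to the refined image with a photometric loss.
    pred = field.render(view_rays[v]).reshape(H, W, 3)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

The img2img `strength` in this sketch controls how much the diffusion prior is allowed to deviate from the current render: lower values preserve the 3D-consistent geometry already learned, while higher values let the sketch and text prompt reshape the view more aggressively.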