ReconFusion: 3D Reconstruction with Diffusion Priors
December 5, 2023
Authors: Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski
cs.AI
Abstract
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at
rendering photorealistic novel views of complex scenes. However, recovering a
high-quality NeRF typically requires tens to hundreds of input images,
resulting in a time-consuming capture process. We present ReconFusion to
reconstruct real-world scenes using only a few photos. Our approach leverages a
diffusion prior for novel view synthesis, trained on synthetic and multiview
datasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel
camera poses beyond those captured by the set of input images. Our method
synthesizes realistic geometry and texture in underconstrained regions while
preserving the appearance of observed regions. We perform an extensive
evaluation across various real-world datasets, including forward-facing and
360-degree scenes, demonstrating significant performance improvements over
previous few-view NeRF reconstruction approaches.
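To make the regularization idea concrete, below is a minimal, hypothetical sketch of diffusion-regularized NeRF training, not the authors' implementation. The names `nerf`, `diffusion_prior`, and `lambda_prior` are placeholders: the sketch assumes `nerf(pose)` renders an RGB image at a camera pose and `diffusion_prior(image, context)` returns a prediction conditioned on the observed views. It shows only the high-level structure described in the abstract: a photometric loss on the few observed images plus a prior term that pulls renders at unobserved poses toward the diffusion model's output.

```python
# Hypothetical sketch (not the authors' code): one training step combining a
# standard NeRF photometric loss with a diffusion-prior term at a novel pose.
import torch
import torch.nn.functional as F

def training_step(nerf, diffusion_prior, observed_views, novel_pose,
                  optimizer, lambda_prior=0.1):
    """One step of few-view NeRF optimization with a diffusion prior.

    observed_views: list of (pose, image) pairs for the captured photos.
    novel_pose: a camera pose outside the captured set.
    """
    optimizer.zero_grad()

    # Photometric reconstruction loss on the few observed input images.
    recon_loss = 0.0
    for pose, image in observed_views:
        recon_loss = recon_loss + F.mse_loss(nerf(pose), image)

    # Render an unobserved view and regularize it toward the diffusion
    # prior's prediction (treated as a fixed target for this step).
    novel_render = nerf(novel_pose)
    with torch.no_grad():
        prior_target = diffusion_prior(novel_render, observed_views)
    prior_loss = F.mse_loss(novel_render, prior_target)

    loss = recon_loss + lambda_prior * prior_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The weighting `lambda_prior` is an assumed knob that trades off fidelity to the observed photos against plausibility of the synthesized geometry and texture in underconstrained regions.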