ReconFusion: 3D Reconstruction with Diffusion Priors

December 5, 2023
Authors: Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski
cs.AI

Abstract

3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at rendering photorealistic novel views of complex scenes. However, recovering a high-quality NeRF typically requires tens to hundreds of input images, resulting in a time-consuming capture process. We present ReconFusion to reconstruct real-world scenes using only a few photos. Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel camera poses beyond those captured by the set of input images. Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions. We perform an extensive evaluation across various real-world datasets, including forward-facing and 360-degree scenes, demonstrating significant performance improvements over previous few-view NeRF reconstruction approaches.
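To make the training objective described above concrete, here is a minimal sketch of one optimization step, assuming a PyTorch-style interface. Every name in it (`nerf`, `diffusion_prior`, `observed_views`, `score_loss`, `sample_novel_pose`, `lambda_prior`) is a hypothetical stand-in for the components the abstract names; this is an illustration under those assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a PyTorch-style API. `nerf`, `diffusion_prior`,
# `observed_views`, and `score_loss` are hypothetical stand-ins for the
# components described in the abstract, not the authors' actual code.
import torch


def sample_novel_pose(poses: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: blend two random observed camera poses to get an
    unobserved viewpoint (the paper's real pose sampling is surely richer)."""
    i, j = torch.randint(len(poses), (2,))
    t = torch.rand(())
    return (1 - t) * poses[i] + t * poses[j]


def training_step(nerf, diffusion_prior, observed_views, optimizer,
                  lambda_prior: float = 1.0) -> float:
    """One optimization step: photometric loss on an observed view plus a
    diffusion-prior loss on a rendering from a novel camera pose."""
    optimizer.zero_grad()

    # Standard NeRF reconstruction loss on the few captured input images.
    image, pose = observed_views.sample()
    recon_loss = torch.mean((nerf.render(pose) - image) ** 2)

    # Regularization term: render an unobserved viewpoint and score it under
    # the diffusion prior, conditioned on the observed images, so that
    # underconstrained regions are pulled toward plausible geometry/texture.
    novel_pose = sample_novel_pose(observed_views.poses)
    novel_render = nerf.render(novel_pose)
    prior_loss = diffusion_prior.score_loss(
        novel_render, condition=observed_views.images, pose=novel_pose)

    loss = recon_loss + lambda_prior * prior_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of this split objective is that the diffusion term only constrains renderings at poses beyond the input set, so the reconstruction loss keeps observed regions faithful to the photos while the prior fills in underconstrained ones, matching the behavior the abstract claims.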