ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models
June 10, 2024
Authors: Meng-Li Shih, Wei-Chiu Ma, Aleksander Holynski, Forrester Cole, Brian L. Curless, Janne Kontkanen
cs.AI
Abstract
We propose ExtraNeRF, a novel method for extrapolating the range of views
handled by a Neural Radiance Field (NeRF). Our main idea is to leverage NeRFs
to model scene-specific, fine-grained details, while capitalizing on diffusion
models to extrapolate beyond our observed data. A key ingredient is to track
visibility to determine what portions of the scene have not been observed, and
focus on reconstructing those regions consistently with diffusion models. Our
primary contributions include a visibility-aware diffusion-based inpainting
module that is fine-tuned on the input imagery, yielding an initial NeRF with
moderate quality (often blurry) inpainted regions, followed by a second
diffusion model trained on the input imagery to consistently enhance, notably
sharpen, the inpainted imagery from the first pass. We demonstrate high-quality
results, extrapolating beyond a small number of (typically six or fewer) input
views, effectively outpainting the NeRF as well as inpainting newly disoccluded
regions inside the original viewing volume. We compare with related work both
quantitatively and qualitatively and show significant gains over prior art.
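
To make the visibility idea concrete, the sketch below shows one common way to obtain a per-pixel visibility mask for a novel view: re-project the NeRF's rendered depth into each training camera and keep only the pixels whose 3D points are actually seen (not occluded) by at least one input view. The function name, inputs, and the depth-consistency test are illustrative assumptions; the abstract does not spell out ExtraNeRF's exact visibility computation.

import numpy as np

def visibility_mask(novel_depth, novel_pose, novel_K,
                    train_depths, train_poses, train_Ks,
                    depth_tol=0.02):
    # Hypothetical visibility check: a novel-view pixel counts as "observed"
    # if its NeRF-rendered 3D point re-projects into at least one training
    # camera and agrees with that camera's rendered depth; everything else
    # would be flagged for diffusion-based inpainting.
    H, W = novel_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # Lift novel-view pixels to world space using the rendered depth.
    cam_pts = (np.linalg.inv(novel_K) @ pix) * novel_depth.reshape(1, -1)
    world = novel_pose @ np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])

    seen = np.zeros(H * W, dtype=bool)
    for depth_i, pose_i, K_i in zip(train_depths, train_poses, train_Ks):
        cam_i = np.linalg.inv(pose_i) @ world        # world -> training camera
        z = cam_i[2]
        proj = K_i @ cam_i[:3]
        x = np.round(proj[0] / np.clip(z, 1e-6, None)).astype(int)
        y = np.round(proj[1] / np.clip(z, 1e-6, None)).astype(int)
        h_i, w_i = depth_i.shape
        inside = (z > 0) & (x >= 0) & (x < w_i) & (y >= 0) & (y < h_i)

        # Occlusion test against the training view's own rendered depth.
        d_ref = np.full(H * W, np.inf)
        d_ref[inside] = depth_i[y[inside], x[inside]]
        seen |= inside & (np.abs(z - d_ref) < depth_tol * d_ref)
    return seen.reshape(H, W)   # False where inpainting is needed

In the abstract's terms, pixels where this mask is False would be handed to the visibility-aware diffusion inpainter, and the second diffusion pass would then enhance and sharpen the inpainted renders.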