
ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models

June 10, 2024
作者: Meng-Li Shih, Wei-Chiu Ma, Aleksander Holynski, Forrester Cole, Brian L. Curless, Janne Kontkanen
cs.AI

Abstract

We propose ExtraNeRF, a novel method for extrapolating the range of views handled by a Neural Radiance Field (NeRF). Our main idea is to leverage NeRFs to model scene-specific, fine-grained details, while capitalizing on diffusion models to extrapolate beyond our observed data. A key ingredient is to track visibility to determine what portions of the scene have not been observed, and focus on reconstructing those regions consistently with diffusion models. Our primary contributions include a visibility-aware diffusion-based inpainting module that is fine-tuned on the input imagery, yielding an initial NeRF with moderate quality (often blurry) inpainted regions, followed by a second diffusion model trained on the input imagery to consistently enhance, notably sharpen, the inpainted imagery from the first pass. We demonstrate high-quality results, extrapolating beyond a small number of (typically six or fewer) input views, effectively outpainting the NeRF as well as inpainting newly disoccluded regions inside the original viewing volume. We compare with related work both quantitatively and qualitatively and show significant gains over prior art.
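The abstract's key ingredient is tracking visibility to decide which regions a diffusion model should inpaint. One common way to estimate visibility in a NeRF is to accumulate opacity along each rendered ray: pixels whose rays never hit dense content were not observed and become inpainting targets. The sketch below illustrates that idea with standard NeRF volume-rendering weights; the function name, threshold, and array shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def visibility_mask(sigmas, deltas, threshold=0.5):
    """Estimate a per-ray visibility mask from NeRF densities.

    sigmas:  (num_rays, num_samples) volume densities along each ray
    deltas:  (num_rays, num_samples) distances between adjacent samples
    Returns a boolean array: True where accumulated opacity suggests the
    ray terminated on observed scene content, False where it passed
    through empty (unobserved) space.
    """
    # Per-sample opacity from the standard volume-rendering formula.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(1.0 - alphas + 1e-10, axis=1)
    trans = np.concatenate(
        [np.ones_like(trans[:, :1]), trans[:, :-1]], axis=1
    )
    # Rendering weights; their sum is the ray's accumulated opacity.
    weights = alphas * trans
    opacity = weights.sum(axis=1)
    return opacity >= threshold
```

Under this scheme, rays with `opacity` below the threshold mark pixels that the diffusion-based inpainting module would need to fill in when the camera moves beyond the original views.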

