NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
May 23, 2024
Authors: Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, Jonathan T. Barron
cs.AI
Abstract
Neural Radiance Fields (NeRFs) typically struggle to reconstruct and render
highly specular objects, whose appearance varies quickly with changes in
viewpoint. Recent works have improved NeRF's ability to render detailed
specular appearance of distant environment illumination, but are unable to
synthesize consistent reflections of closer content. Moreover, these techniques
rely on large computationally-expensive neural networks to model outgoing
radiance, which severely limits optimization and rendering speed. We address
these issues with an approach based on ray tracing: instead of querying an
expensive neural network for the outgoing view-dependent radiance at points
along each camera ray, our model casts reflection rays from these points and
traces them through the NeRF representation to render feature vectors which are
decoded into color using a small inexpensive network. We demonstrate that our
model outperforms prior methods for view synthesis of scenes containing shiny
objects, and that it is the only existing NeRF method that can synthesize
photorealistic specular appearance and reflections in real-world scenes, while
requiring comparable optimization time to current state-of-the-art view
synthesis models.
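The core idea in the abstract — casting a reflection ray from a surface point and volume-rendering feature vectors along it instead of querying a large view-dependent network — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the field query `query_field`, the feature dimension, and the uniform sampling scheme are all assumptions made for clarity.

```python
import numpy as np

def reflect(d, n):
    # Mirror the view direction d about the surface normal n: r = d - 2(d.n)n
    return d - 2.0 * np.dot(d, n) * n

def composite(values, density, deltas):
    # Standard volume-rendering weights: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    # where T_i is the transmittance accumulated before sample i.
    alpha = 1.0 - np.exp(-density * deltas)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = trans * alpha
    return (w[:, None] * values).sum(axis=0), w

def render_reflection_features(origin, view_dir, normal, query_field,
                               n_samples=64, far=4.0):
    # Cast a reflection ray from a surface point and composite per-sample
    # feature vectors along it. `query_field(pts)` is a hypothetical stand-in
    # for the NeRF representation; it returns (density, features) per point.
    r = reflect(view_dir, normal)
    r = r / np.linalg.norm(r)
    t = np.linspace(1e-3, far, n_samples)
    pts = origin[None, :] + t[:, None] * r[None, :]
    density, feats = query_field(pts)              # shapes (N,), (N, F)
    deltas = np.full(n_samples, t[1] - t[0])
    feat, _ = composite(feats, density, deltas)
    # In the paper's pipeline this feature vector would then be decoded
    # into color by a small, inexpensive network.
    return feat
```

In the actual method the composited feature would be passed through a small decoder MLP to produce the reflected color; here the sketch stops at the feature vector to keep the example self-contained.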