SpecNeRF: Gaussian Directional Encoding for Specular Reflections
December 20, 2023
Authors: Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollhöfer, Christian Richardt
cs.AI
Abstract
Neural radiance fields have achieved remarkable performance in modeling the
appearance of 3D scenes. However, existing approaches still struggle with the
view-dependent appearance of glossy surfaces, especially under complex lighting
of indoor environments. Unlike existing methods, which typically assume distant
lighting like an environment map, we propose a learnable Gaussian directional
encoding to better model the view-dependent effects under near-field lighting
conditions. Importantly, our new directional encoding captures the
spatially-varying nature of near-field lighting and emulates the behavior of
prefiltered environment maps. As a result, it enables the efficient evaluation
of preconvolved specular color at any 3D location with varying roughness
coefficients. We further introduce a data-driven geometry prior that helps
alleviate the shape radiance ambiguity in reflection modeling. We show that our
Gaussian directional encoding and geometry prior significantly improve the
modeling of challenging specular reflections in neural radiance fields, which
helps decompose appearance into more physically meaningful components.
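To make the idea of a Gaussian directional encoding concrete, below is a minimal sketch (not the authors' implementation) of one plausible formulation: a set of learnable 3D Gaussians is evaluated along the reflected ray leaving a query point, and the surface roughness inflates each Gaussian's scale, loosely emulating lookups into a prefiltered environment map. All names, tensor shapes, the closed-form ray response, and the roughness-to-scale mapping are illustrative assumptions.

```python
import torch

def gaussian_directional_encoding(x, d, roughness, means, scales):
    """
    x:         (B, 3)  query positions
    d:         (B, 3)  unit reflection directions
    roughness: (B, 1)  per-point roughness in [0, 1]
    means:     (N, 3)  learnable Gaussian centers
    scales:    (N, 1)  learnable Gaussian standard deviations
    returns:   (B, N)  peak response of each Gaussian along the reflected ray
    """
    # Vector from each query point to each Gaussian center: (B, N, 3)
    to_center = means[None, :, :] - x[:, None, :]

    # Ray parameter of the closest approach to each center, clamped to the
    # forward half-line so Gaussians behind the surface contribute little.
    t = (to_center * d[:, None, :]).sum(-1).clamp(min=0.0)          # (B, N)

    # Squared distance from the closest point on the ray to each center.
    closest = x[:, None, :] + t[..., None] * d[:, None, :]          # (B, N, 3)
    sq_dist = ((closest - means[None, :, :]) ** 2).sum(-1)          # (B, N)

    # Roughness-dependent inflation: rougher surfaces see "blurred" Gaussians,
    # analogous to sampling a prefiltered environment map at a coarser level.
    sigma = scales[None, :, 0] * (1.0 + 4.0 * roughness)            # (B, N)

    return torch.exp(-0.5 * sq_dist / (sigma ** 2 + 1e-8))


if __name__ == "__main__":
    B, N = 4, 16
    enc = gaussian_directional_encoding(
        x=torch.randn(B, 3),
        d=torch.nn.functional.normalize(torch.randn(B, 3), dim=-1),
        roughness=torch.rand(B, 1),
        means=torch.nn.Parameter(torch.randn(N, 3)),
        scales=torch.nn.Parameter(torch.rand(N, 1) + 0.1),
    )
    print(enc.shape)  # torch.Size([4, 16])
```

In such a scheme, the resulting (B, N) feature vector would be fed to a small MLP that predicts the specular color; because the response depends on the query position as well as the direction, it can capture the spatially varying, near-field lighting behavior that a fixed distant environment map cannot.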