Multi-Space Neural Radiance Fields
May 7, 2023
Authors: Ze-Xin Yin, Jiaxiong Qiu, Ming-Ming Cheng, Bo Ren
cs.AI
Abstract
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of
reflective objects, often resulting in blurry or distorted rendering. Instead
of calculating a single radiance field, we propose a multi-space neural
radiance field (MS-NeRF) that represents the scene using a group of feature
fields in parallel sub-spaces, which helps the neural network better handle
reflective and refractive objects. Our
multi-space scheme works as an enhancement to existing NeRF methods, requiring
only a small computational overhead to train and infer the extra-space
outputs. We demonstrate the superiority and compatibility of our approach using
three representative NeRF-based models, i.e., NeRF, Mip-NeRF, and Mip-NeRF 360.
Comparisons are performed on a newly constructed dataset consisting of 25
synthetic scenes and 7 real captured scenes with complex reflection and
refraction, all having 360-degree viewpoints. Extensive experiments show that
our approach significantly outperforms the existing single-space NeRF methods
for rendering high-quality scenes involving complex light paths through
mirror-like objects. Our code and dataset will be publicly available at
https://zx-yin.github.io/msnerf.
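The core idea of the abstract, rendering a scene as a group of parallel sub-space
feature fields whose outputs are blended into one image, can be illustrated with a
minimal sketch. The function name `ms_nerf_composite` and the softmax-weighted
blending below are illustrative assumptions for exposition, not the paper's exact
formulation:

```python
import numpy as np

def ms_nerf_composite(sub_rgbs, sub_logits):
    """Blend per-sub-space renderings of one pixel into a final color.

    sub_rgbs:   (K, 3) array - RGB rendered by each of K parallel sub-spaces
    sub_logits: (K,)   array - unnormalized visibility score per sub-space
    """
    w = np.exp(sub_logits - sub_logits.max())  # numerically stable softmax
    w /= w.sum()                               # weights over the K sub-spaces
    return (w[:, None] * sub_rgbs).sum(axis=0)

# Two hypothetical sub-spaces: the directly seen scene and one mirror image.
rgbs = np.array([[0.8, 0.2, 0.2],    # direct view of the surface
                 [0.1, 0.1, 0.9]])   # virtual scene behind the mirror
logits = np.array([2.0, 0.0])        # the direct view dominates this pixel
pixel = ms_nerf_composite(rgbs, logits)
```

Keeping the reflected content in its own sub-space, rather than forcing a single
radiance field to explain it, is what lets the blended result stay sharp where a
mirror-like surface would otherwise produce blurry or distorted renderings.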