VR-NeRF: High-Fidelity Virtualized Walkable Spaces
November 5, 2023
Authors: Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt
cs.AI
Abstract
We present an end-to-end system for the high-fidelity capture, model
reconstruction, and real-time rendering of walkable spaces in virtual reality
using neural radiance fields. To this end, we designed and built a custom
multi-camera rig to densely capture walkable spaces in high fidelity and with
multi-view high dynamic range images in unprecedented quality and density. We
extend instant neural graphics primitives with a novel perceptual color space
for learning accurate HDR appearance, and an efficient mip-mapping mechanism
for level-of-detail rendering with anti-aliasing, while carefully optimizing
the trade-off between quality and speed. Our multi-GPU renderer enables
high-fidelity volume rendering of our neural radiance field model at the full
VR resolution of dual 2K×2K at 36 Hz on our custom demo machine. We
demonstrate the quality of our results on our challenging high-fidelity
datasets, and compare our method and datasets to existing baselines. We release
our dataset on our project website.
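
The abstract names two rendering-oriented mechanisms, a perceptual color space for learning HDR appearance and a mip-mapping scheme for anti-aliased level-of-detail rendering, without giving implementation detail. The sketches below are illustrative only, not the authors' implementation. Assuming the perceptual color space behaves like the SMPTE ST 2084 (PQ) transfer function, a standard perceptually uniform encoding for HDR, a training loss can be computed on PQ-encoded values so that equal errors correspond to roughly equal perceived brightness differences:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants; the paper's exact color space is not
# specified in the abstract, so PQ stands in here as a representative
# perceptually uniform HDR encoding.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32
PEAK_NITS = 10000.0  # PQ's reference peak luminance

def pq_encode(linear_nits):
    """Map linear HDR radiance (in nits) to perceptual codes in [0, 1]."""
    y = np.clip(linear_nits / PEAK_NITS, 0.0, 1.0)
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2

def pq_decode(code):
    """Invert pq_encode back to linear radiance in nits."""
    cm = np.maximum(code, 0.0) ** (1.0 / M2)
    y = np.maximum(cm - C1, 0.0) / (C2 - C3 * cm)
    return (y ** (1.0 / M1)) * PEAK_NITS

# Hypothetical training objective: comparing predicted and ground-truth
# HDR radiance in the perceptual space weights errors by how visible
# they are, rather than by raw linear magnitude.
def hdr_loss(pred_hdr, gt_hdr):
    return np.mean((pq_encode(pred_hdr) - pq_encode(gt_hdr)) ** 2)
```

For the mip-mapping mechanism, one plausible reading in the context of instant neural graphics primitives, which store features in a multi-resolution hash grid, is to fade out grid levels whose cells are finer than a sample's pixel footprint. The sketch below, with hypothetical names and a hypothetical weighting rule, illustrates that idea:

```python
import numpy as np

def level_weights(footprint, cell_sizes):
    """Soft level-of-detail mask over multi-resolution grid features.

    `footprint` is a sample's world-space pixel footprint and `cell_sizes`
    lists each grid level's world-space cell size, coarse to fine (both
    hypothetical). Levels finer than the footprint are attenuated, which
    plays the role of mip-mapping and suppresses aliasing.
    """
    sizes = np.asarray(cell_sizes)
    return np.clip(sizes / np.maximum(footprint, 1e-9), 0.0, 1.0)
```

The weighted features would then be concatenated and decoded by the MLP as usual. The actual system additionally tunes the quality/speed trade-off and distributes volume rendering across multiple GPUs, which the abstract describes but these sketches do not model.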