

VR-NeRF: High-Fidelity Virtualized Walkable Spaces

November 5, 2023
作者: Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt
cs.AI

Abstract
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity and with multi-view high dynamic range images in unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully optimizing the trade-off between quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field model at the full VR resolution of dual 2K×2K at 36 Hz on our custom demo machine. We demonstrate the quality of our results on our challenging high-fidelity datasets, and compare our method and datasets to existing baselines. We release our dataset on our project website.
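The abstract mentions learning HDR appearance in a perceptual color space, so that reconstruction losses weight errors according to perceptual sensitivity rather than raw linear radiance. The exact transform is not given here; as a minimal sketch, the following assumes the SMPTE ST 2084 (PQ) curve as an illustrative stand-in for such a perceptual space, and shows how an L2 loss in that space emphasizes errors in dark regions more than the same absolute error in highlights.

```python
# Illustrative sketch only: the paper's exact perceptual color space is
# not specified in this abstract. The SMPTE ST 2084 (PQ) transfer curve
# is used here as a stand-in example of a perceptual HDR encoding.
import numpy as np

# PQ constants from SMPTE ST 2084
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(linear: np.ndarray) -> np.ndarray:
    """Map linear HDR radiance (normalized to [0, 1]) into PQ space."""
    y = np.clip(linear, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def perceptual_l2(pred_linear: np.ndarray, gt_linear: np.ndarray) -> float:
    """L2 loss computed in the perceptual domain, not on linear radiance."""
    return float(np.mean((pq_encode(pred_linear) - pq_encode(gt_linear)) ** 2))

# The same absolute radiance error is penalized more in shadows than in
# highlights, matching how human vision resolves detail.
dark_err = perceptual_l2(np.array([0.01]), np.array([0.02]))
bright_err = perceptual_l2(np.array([0.90]), np.array([0.91]))
```

Training a radiance field against a loss like `perceptual_l2` (instead of plain linear-space L2) is one way a perceptual color space can improve HDR appearance fidelity, since gradient magnitude is redistributed toward perceptually important dark regions.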