ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields
December 13, 2023
Authors: Juan Luis Gonzalez Bello, Minh-Quan Viet Bui, Munchurl Kim
cs.AI
Abstract
Recent advances in neural rendering have shown that, albeit slow, implicit
compact models can learn a scene's geometries and view-dependent appearances
from multiple views. To maintain such a small memory footprint but achieve
faster inference times, recent works have adopted 'sampler' networks that
adaptively sample a small subset of points along each ray in the implicit
neural radiance fields. Although these methods achieve up to a 10x
reduction in rendering time, they still suffer from considerable quality
degradation compared to the vanilla NeRF. In contrast, we propose ProNeRF,
which provides an optimal trade-off between memory footprint (similar to NeRF),
speed (faster than HyperReel), and quality (better than K-Planes). ProNeRF is
equipped with a novel projection-aware sampling (PAS) network together with a
new training strategy for ray exploration and exploitation, allowing for
efficient fine-grained particle sampling. Our ProNeRF yields state-of-the-art
metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding
0.95dB higher PSNR than the best published sampler-based method, HyperReel. Our
exploration and exploitation training strategy allows ProNeRF to learn the full
scenes' color and density distributions while also learning efficient ray
sampling focused on the highest-density regions. We provide extensive
experimental results that support the effectiveness of our method on the widely
adopted forward-facing and 360 datasets, LLFF and Blender, respectively.
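
The abstract's central idea, a small 'sampler' network that predicts a handful of query depths per ray so the radiance field only has to be evaluated at those few points, can be sketched in a few lines. The snippet below is a minimal, generic PyTorch illustration under assumed interfaces (a RaySampler module, a field(points, dirs) callable returning per-point color and density, and fixed near/far bounds); it is not the authors' actual ProNeRF/PAS architecture or exploration-and-exploitation training strategy.

    # Minimal sketch of a 'sampler network' + volume rendering, in PyTorch.
    # All names (RaySampler, render_rays, near/far bounds) are illustrative
    # assumptions, not the ProNeRF/PAS implementation.
    import torch
    import torch.nn as nn

    class RaySampler(nn.Module):
        """Predicts n_samples query depths in [near, far] for each ray."""

        def __init__(self, n_samples: int = 8, near: float = 2.0, far: float = 6.0):
            super().__init__()
            self.near, self.far = near, far
            self.mlp = nn.Sequential(
                nn.Linear(6, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_samples),
            )

        def forward(self, rays_o: torch.Tensor, rays_d: torch.Tensor) -> torch.Tensor:
            # Map raw predictions into [near, far] and sort so depths increase.
            logits = self.mlp(torch.cat([rays_o, rays_d], dim=-1))
            t = self.near + (self.far - self.near) * torch.sigmoid(logits)
            return torch.sort(t, dim=-1).values  # (n_rays, n_samples)

    def render_rays(sampler, field, rays_o, rays_d):
        """Query `field` only at the sampler's depths, then alpha-composite.

        `field(points, dirs)` is an assumed interface returning
        rgb of shape (R, S, 3) and sigma of shape (R, S).
        """
        t = sampler(rays_o, rays_d)                               # (R, S)
        pts = rays_o[:, None] + t[..., None] * rays_d[:, None]    # (R, S, 3)
        dirs = rays_d[:, None].expand_as(pts)
        rgb, sigma = field(pts, dirs)

        # Standard volume rendering: alpha_i = 1 - exp(-sigma_i * delta_i),
        # with an exclusive cumulative product for the transmittance.
        delta = torch.diff(t, dim=-1, append=torch.full_like(t[:, :1], 1e10))
        alpha = 1.0 - torch.exp(-sigma * delta)
        trans = torch.cumprod(
            torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
            dim=-1,
        )[:, :-1]
        weights = alpha * trans                                   # (R, S)
        return (weights[..., None] * rgb).sum(dim=1)              # (R, 3)

The point of the design is cost: with, say, 8 predicted samples per ray instead of the hundred-plus stratified samples of vanilla NeRF, the number of radiance-field queries per ray drops by roughly an order of magnitude, which is the kind of reduction sampler-based methods exploit for their reported speedups.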