ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields
December 13, 2023
Authors: Juan Luis Gonzalez Bello, Minh-Quan Viet Bui, Munchurl Kim
cs.AI
Abstract
Recent advances in neural rendering have shown that, albeit slow, compact implicit models can learn a scene's geometry and view-dependent appearance from multiple views. To maintain such a small memory footprint while achieving faster inference times, recent works have adopted "sampler" networks that adaptively sample a small subset of points along each ray in the implicit neural radiance field. Although these methods achieve up to a 10× reduction in rendering time, they still suffer from considerable quality degradation compared to the vanilla NeRF. In contrast, we propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes). ProNeRF is equipped with a novel projection-aware sampling (PAS) network together with a new training strategy for ray exploration and exploitation, allowing for efficient fine-grained particle sampling. Our ProNeRF yields state-of-the-art metrics, being 15-23× faster than NeRF with 0.65 dB higher PSNR, and yielding 0.95 dB higher PSNR than the best published sampler-based method, HyperReel. Our exploration and exploitation training strategy allows ProNeRF to learn the full scenes' color and density distributions while also learning efficient ray sampling focused on the highest-density regions. We provide extensive experimental results that support the effectiveness of our method on the widely adopted forward-facing and 360° datasets, LLFF and Blender, respectively.
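For intuition, below is a minimal sketch of the generic "sampler network" idea the abstract builds on: a small MLP looks at each ray and predicts a handful of ordered depths, so the radiance field is queried only a few times per ray instead of the many uniform or hierarchical samples of vanilla NeRF. This is an illustrative sketch, not the authors' PAS network; all names and hyperparameters (SamplerNet, n_samples, near/far bounds) are hypothetical assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SamplerNet(nn.Module):
    """Hypothetical sampler: predicts a few sample depths per ray."""
    def __init__(self, n_samples: int = 8, hidden: int = 128):
        super().__init__()
        # Input: ray origin (3 dims) + ray direction (3 dims).
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_samples),
        )

    def forward(self, rays_o, rays_d, near=2.0, far=6.0):
        # rays_o, rays_d: (B, 3) per-ray origin and unit direction.
        logits = self.mlp(torch.cat([rays_o, rays_d], dim=-1))  # (B, n_samples)
        # softmax + cumsum yields sorted fractions in (0, 1], which keeps the
        # predicted depths ordered and confined to the [near, far] interval.
        t = near + (far - near) * torch.cumsum(torch.softmax(logits, dim=-1), dim=-1)
        return t  # (B, n_samples) depths at which to query the radiance field

# Usage: evaluate the NeRF MLP only at the few predicted 3D points per ray.
rays_o = torch.zeros(1024, 3)
rays_d = F.normalize(torch.randn(1024, 3), dim=-1)
t = SamplerNet()(rays_o, rays_d)                              # (1024, 8)
pts = rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]  # (1024, 8, 3)

The softmax-plus-cumsum parameterization is just one simple way to keep predicted depths monotonic and bounded; the paper's projection-aware sampling and exploration/exploitation training are more involved than this sketch suggests.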