Adaptive Shells for Efficient Neural Radiance Field Rendering
November 16, 2023
Authors: Zian Wang, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun Gao, Alexander Keller, Sanja Fidler, Thomas Müller, Zan Gojcic
cs.AI
Abstract
Neural radiance fields achieve unprecedented quality for novel view
synthesis, but their volumetric formulation remains expensive, requiring a huge
number of samples to render high-resolution images. Volumetric encodings are
essential to represent fuzzy geometry such as foliage and hair, and they are
well-suited for stochastic optimization. Yet, many scenes ultimately consist
largely of solid surfaces which can be accurately rendered by a single sample
per pixel. Based on this insight, we propose a neural radiance formulation that
smoothly transitions between volumetric- and surface-based rendering, greatly
accelerating rendering speed and even improving visual fidelity. Our method
constructs an explicit mesh envelope which spatially bounds a neural volumetric
representation. In solid regions, the envelope nearly converges to a surface
and can often be rendered with a single sample. To this end, we generalize the
NeuS formulation with a learned spatially-varying kernel size which encodes the
spread of the density, fitting a wide kernel to volume-like regions and a tight
kernel to surface-like regions. We then extract an explicit mesh of a narrow
band around the surface, with width determined by the kernel size, and
fine-tune the radiance field within this band. At inference time, we cast rays
against the mesh and evaluate the radiance field only within the enclosed
region, greatly reducing the number of samples required. Experiments show that
our approach enables efficient rendering at very high fidelity. We also
demonstrate that the extracted envelope enables downstream applications such as
animation and simulation.
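The abstract's generalized NeuS formulation, in which a learned spatially-varying kernel size controls how sharply opacity concentrates around the zero level set of a signed distance field, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `neus_alpha`, the use of a plain logistic CDF, and the constant per-region kernel sizes are simplifications, not the paper's actual implementation.

```python
import numpy as np

def neus_alpha(sdf, kernel_size):
    """Per-sample opacity along a ray, NeuS-style, with a
    spatially-varying kernel size (hypothetical simplification).

    sdf:         (N+1,) signed-distance values at sample endpoints
    kernel_size: (N,)   learned inverse spread s(x) per interval;
                        large s -> tight (surface-like) kernel,
                        small s -> wide (volume-like) kernel
    """
    # Logistic CDF of the SDF at the two endpoints of each interval.
    phi_prev = 1.0 / (1.0 + np.exp(-kernel_size * sdf[:-1]))
    phi_next = 1.0 / (1.0 + np.exp(-kernel_size * sdf[1:]))
    # Discrete opacity from the CDF difference, clamped to [0, 1].
    return np.clip((phi_prev - phi_next) / (phi_prev + 1e-6), 0.0, 1.0)

# A ray crossing a solid surface: SDF goes from positive to negative.
sdf = np.linspace(0.5, -0.5, 9)            # 9 endpoints -> 8 intervals
tight = neus_alpha(sdf, np.full(8, 50.0))  # surface-like region
wide = neus_alpha(sdf, np.full(8, 2.0))    # volume-like region
# A tight kernel concentrates nearly all opacity at the zero
# crossing (one sample suffices), while a wide kernel spreads
# opacity across the band, matching fuzzy geometry.
```

The kernel size also determines the width of the narrow band extracted as the explicit mesh envelope: where the fitted kernel is tight, the band collapses toward a surface; where it is wide, the band stays thick enough to enclose the volumetric content.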