

HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces

December 5, 2023
Authors: Haithem Turki, Vasu Agrawal, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Deva Ramanan, Michael Zollhöfer, Christian Richardt
cs.AI

Abstract

Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render. One reason is that they make use of volume rendering, thus requiring many samples (and model queries) per ray at render time. Although this representation is flexible and easy to optimize, most real-world objects can be modeled more efficiently with surfaces instead of volumes, requiring far fewer samples per ray. This observation has spurred considerable progress in surface representations such as signed distance functions, but these may struggle to model semi-opaque and thin structures. We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces while modeling the (typically) small fraction of challenging regions volumetrically. We evaluate HybridNeRF against the challenging Eyeful Tower dataset along with other commonly used view synthesis datasets. When comparing to state-of-the-art baselines, including recent rasterization-based approaches, we improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
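
The trade-off the abstract describes can be made concrete with a small sketch. One common way to realize an "adaptive volumetric surface" is a VolSDF-style mapping from signed distance to density whose Laplace scale β varies spatially: a small β makes a region behave like a hard surface (transmittance collapses at the first crossing, so ray marching terminates after very few samples), while a large β leaves it volumetric. The code below is a minimal illustration of that idea under these assumptions, not the paper's implementation; `sdf_fn`, `beta_fn`, `color_fn`, and the sample budget are hypothetical stand-ins for the learned fields.

```python
import numpy as np

def sdf_to_density(sdf, beta):
    """Map a signed distance to a volume density via the Laplace CDF
    (VolSDF-style). A small beta gives a sharp, surface-like falloff;
    a large beta keeps the region soft and volumetric."""
    alpha = 1.0 / beta
    if sdf > 0:  # outside the surface
        return 0.5 * alpha * np.exp(-sdf / beta)
    return alpha * (1.0 - 0.5 * np.exp(sdf / beta))

def render_ray(sdf_fn, beta_fn, color_fn, origin, direction,
               t_near, t_far, n_samples=64):
    """Volume-render one ray. Where beta_fn is small, transmittance
    collapses at the first surface crossing and marching stops early;
    where beta_fn is large, the region is integrated volumetrically."""
    ts = np.linspace(t_near, t_far, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        x = origin + t * direction
        sigma = sdf_to_density(sdf_fn(x), beta_fn(x))
        a = 1.0 - np.exp(-sigma * dt)          # per-sample opacity
        color += transmittance * a * color_fn(x)
        transmittance *= 1.0 - a
        if transmittance < 1e-3:               # effectively a surface hit
            break
    return color

# Example: a unit sphere rendered almost entirely as a surface.
sphere_sdf = lambda x: np.linalg.norm(x) - 1.0
tight_beta = lambda x: 0.01    # small scale -> surface-like everywhere
white = lambda x: np.ones(3)
print(render_ray(sphere_sdf, tight_beta, white,
                 np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                 t_near=0.5, t_far=5.0))
```

In this toy setup, the early-termination branch is what delivers the per-ray savings the abstract refers to: surface-like regions cost only a handful of model queries, while just the few genuinely volumetric regions pay the full sampling budget.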