
HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces

December 5, 2023
作者: Haithem Turki, Vasu Agrawal, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Deva Ramanan, Michael Zollhöfer, Christian Richardt
cs.AI

Abstract

Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render. One reason is that they make use of volume rendering, thus requiring many samples (and model queries) per ray at render time. Although this representation is flexible and easy to optimize, most real-world objects can be modeled more efficiently with surfaces instead of volumes, requiring far fewer samples per ray. This observation has spurred considerable progress in surface representations such as signed distance functions, but these may struggle to model semi-opaque and thin structures. We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces while modeling the (typically) small fraction of challenging regions volumetrically. We evaluate HybridNeRF against the challenging Eyeful Tower dataset along with other commonly used view synthesis datasets. When comparing to state-of-the-art baselines, including recent rasterization-based approaches, we reduce error rates by 15-30% while achieving real-time frame rates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
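The core idea from the abstract — render most of the scene as a sharp surface that terminates rays after a handful of samples, and fall back to soft volumetric behavior only where needed — can be illustrated with a toy sketch. The snippet below converts a signed distance to density with a Laplace-style falloff (as in VolSDF-like formulations) whose sharpness parameter `beta` varies per point, then marches a ray with early termination. This is a minimal illustration of the principle, not the paper's implementation; `sdf_to_density`, `render_ray`, and the flat-wall scene are hypothetical names invented for the example.

```python
import math

def sdf_to_density(sdf, beta):
    # Laplace-CDF density from a signed distance (VolSDF-style).
    # Small beta -> sharp, surface-like falloff; large beta -> soft,
    # volumetric falloff. HybridNeRF's key idea is that beta can vary
    # spatially, so only "hard" regions pay the volumetric cost.
    alpha = 1.0 / beta
    if sdf > 0:
        return 0.5 * alpha * math.exp(-sdf / beta)
    return alpha * (1.0 - 0.5 * math.exp(sdf / beta))

def render_ray(sdf_fn, beta_fn, t_near, t_far, n_samples=128, tau=1e-3):
    # Front-to-back compositing along one ray, stopping once the
    # transmittance drops below tau (early ray termination).
    # Returns (opacity, final transmittance, number of model queries).
    dt = (t_far - t_near) / (n_samples - 1)
    transmittance, opacity, n_queries = 1.0, 0.0, 0
    for i in range(n_samples):
        t = t_near + i * dt
        n_queries += 1
        sigma = sdf_to_density(sdf_fn(t), beta_fn(t))
        a = 1.0 - math.exp(-sigma * dt)   # per-sample alpha
        opacity += transmittance * a
        transmittance *= 1.0 - a
        if transmittance < tau:           # ray is saturated: stop early
            break
    return opacity, transmittance, n_queries

# Toy scene: a flat wall at depth t = 2 along the ray.
wall = lambda t: 2.0 - t
sharp = render_ray(wall, lambda t: 0.01, 0.0, 4.0)  # surface-like region
soft = render_ray(wall, lambda t: 0.5, 0.0, 4.0)    # volumetric region
```

With the sharp (surface-like) `beta`, the transmittance collapses within a few samples of the wall and the loop exits early, while the soft (volumetric) `beta` forces the march to query the model along the full ray — the sample-count gap the abstract attributes to surface versus volume rendering.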