TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
January 11, 2024
Authors: Linus Franke, Darius Rückert, Laura Fink, Marc Stamminger
cs.AI
Abstract
Point-based radiance field rendering has demonstrated impressive results for
novel view synthesis, offering a compelling blend of rendering quality and
computational efficiency. However, even the latest approaches in this domain
are not without shortcomings. 3D Gaussian Splatting [Kerbl and Kopanas et al.
2023] struggles when tasked with rendering highly detailed scenes, due to
blurring and cloudy artifacts. ADOP [Rückert et al. 2022], on the other hand,
can produce crisper images, but its neural reconstruction network reduces
performance, it grapples with temporal instability, and it is unable to
effectively fill large gaps in the point cloud.
In this paper, we present TRIPS (Trilinear Point Splatting), an approach that
combines ideas from both Gaussian Splatting and ADOP. The fundamental concept
behind our novel technique involves rasterizing points into a screen-space
image pyramid, with the selection of the pyramid layer determined by the
projected point size. This approach allows rendering arbitrarily large points
using a single trilinear write. A lightweight neural network is then used to
reconstruct a hole-free image including detail beyond splat resolution.
Importantly, our render pipeline is entirely differentiable, allowing for
automatic optimization of both point sizes and positions.
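The trilinear write described above can be illustrated with a minimal sketch: pick a fractional pyramid level from the projected point size (level = log2(size)), blend between the two adjacent integer levels, and perform a bilinear write into the four nearest texels of each. This is an assumption-laden illustration in NumPy, not the paper's actual CUDA rasterizer; the function name `trilinear_splat` and its signature are hypothetical.

```python
import math
import numpy as np

def trilinear_splat(pyramid, x, y, size, color, opacity=1.0):
    """Splat one point into a screen-space image pyramid with a single
    trilinear write (hypothetical sketch, not the paper's kernel).

    pyramid : list of (H_l, W_l, C) arrays; level 0 is full resolution,
              each coarser level halves the resolution.
    x, y    : projected screen position at level 0, in pixels.
    size    : projected point size in pixels; selects the pyramid layer.
    """
    # Choose the fractional pyramid level so the point covers roughly
    # one texel at that level: level = log2(size).
    level = max(0.0, math.log2(max(size, 1.0)))
    l0 = min(int(level), len(pyramid) - 2)
    t = level - l0  # blend weight between the two adjacent levels

    for l, w_level in ((l0, 1.0 - t), (l0 + 1, t)):
        if w_level == 0.0:
            continue
        img = pyramid[l]
        scale = 2.0 ** l
        # Point position in this level's texel coordinates.
        px, py = x / scale - 0.5, y / scale - 0.5
        ix, iy = int(math.floor(px)), int(math.floor(py))
        fx, fy = px - ix, py - iy
        # Bilinear write into the 4 nearest texels of this level
        # (2 levels x 4 texels = one 8-texel trilinear write).
        for dy, wy in ((0, 1.0 - fy), (1, fy)):
            for dx, wx in ((0, 1.0 - fx), (1, fx)):
                u, v = ix + dx, iy + dy
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    img[v, u] += w_level * wy * wx * opacity * color

# Build a 3-level pyramid and splat one point with a 3-pixel footprint;
# its contribution lands on levels 1 and 2.
H, W, C = 16, 16, 3
pyramid = [np.zeros((H >> l, W >> l, C)) for l in range(3)]
trilinear_splat(pyramid, x=8.0, y=8.0, size=3.0,
                color=np.array([1.0, 0.0, 0.0]))
```

Because the level and bilinear weights sum to one, an in-bounds point deposits exactly its full opacity-weighted color into the pyramid, which is what makes the write differentiable with respect to point size and position.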
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art
methods in terms of rendering quality while maintaining a real-time frame rate
of 60 frames per second on readily available hardware. This performance extends
to challenging scenarios, such as scenes featuring intricate geometry,
expansive landscapes, and auto-exposed footage.