3D Gaussian Splatting for Real-Time Radiance Field Rendering
August 8, 2023
Authors: Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
cs.AI
Abstract
Radiance Field methods have recently revolutionized novel-view synthesis of
scenes captured with multiple photos or videos. However, achieving high visual
quality still requires neural networks that are costly to train and render,
while recent faster methods inevitably trade off speed for quality. For
unbounded and complete scenes (rather than isolated objects) and 1080p
resolution rendering, no current method can achieve real-time display rates. We
introduce three key elements that allow us to achieve state-of-the-art visual
quality while maintaining competitive training times and importantly allow
high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution.
First, starting from sparse points produced during camera calibration, we
represent the scene with 3D Gaussians that preserve desirable properties of
continuous volumetric radiance fields for scene optimization while avoiding
unnecessary computation in empty space; Second, we perform interleaved
optimization/density control of the 3D Gaussians, notably optimizing
anisotropic covariance to achieve an accurate representation of the scene;
Third, we develop a fast visibility-aware rendering algorithm that supports
anisotropic splatting and both accelerates training and allows real-time
rendering. We demonstrate state-of-the-art visual quality and real-time
rendering on several established datasets.
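The core idea of representing a scene as a set of anisotropic 3D Gaussians that are alpha-composited in depth order can be illustrated with a minimal sketch. This is not the authors' implementation (which rasterizes projected 2D splats on the GPU); the `Gaussian` class, the per-point `density` evaluation, and the simple sort-by-z depth ordering below are simplifying assumptions for illustration only.

```python
import numpy as np

# Hypothetical minimal splat representation: each primitive is an
# anisotropic 3D Gaussian with a mean, a full 3x3 covariance,
# an opacity, and an RGB color.
class Gaussian:
    def __init__(self, mean, cov, opacity, color):
        self.mean = np.asarray(mean, dtype=float)    # 3D center
        self.cov = np.asarray(cov, dtype=float)      # 3x3 anisotropic covariance
        self.opacity = float(opacity)                # base alpha in [0, 1]
        self.color = np.asarray(color, dtype=float)  # RGB

def density(g, x):
    """Unnormalized anisotropic Gaussian falloff at point x."""
    d = np.asarray(x, dtype=float) - g.mean
    return np.exp(-0.5 * d @ np.linalg.inv(g.cov) @ d)

def composite(gaussians, x):
    """Front-to-back alpha compositing of the splats covering point x.
    Depth ordering here is a naive sort on the z of each mean; the paper's
    renderer instead sorts projected splats per tile for real-time rates."""
    ordered = sorted(gaussians, key=lambda g: g.mean[2])
    color = np.zeros(3)
    transmittance = 1.0
    for g in ordered:
        alpha = g.opacity * density(g, x)
        color += transmittance * alpha * g.color
        transmittance *= (1.0 - alpha)
    return color
```

In the actual method, the covariances, opacities, and colors of these primitives are the quantities refined during the interleaved optimization/density-control phase, and empty space costs nothing because no Gaussians are placed there.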