Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields
August 7, 2024
作者: Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, Eunbyung Park
cs.AI
Abstract
3D Gaussian splatting (3DGS) has recently emerged as an alternative
representation that leverages a 3D Gaussian-based representation and introduces
an approximated volumetric rendering, achieving very fast rendering speed and
promising image quality. Furthermore, subsequent studies have successfully
extended 3DGS to dynamic 3D scenes, demonstrating its wide range of
applications. However, a significant drawback is that 3DGS and its follow-up
methods require a substantial number of Gaussians to maintain the high fidelity
of the rendered images, which demands a large amount of memory and storage. To
address this critical issue, we place a specific emphasis on two key
objectives: reducing the number of Gaussian points without sacrificing
performance and compressing the Gaussian attributes, such as view-dependent
color and covariance. To this end, we propose a learnable mask strategy that
significantly reduces the number of Gaussians while preserving high
performance. In addition, we propose a compact but effective representation of
view-dependent color by employing a grid-based neural field rather than relying
on spherical harmonics. Finally, we learn codebooks to compactly represent the
geometric and temporal attributes by residual vector quantization. With model
compression techniques such as quantization and entropy coding, we consistently
show over 25x reduced storage and enhanced rendering speed compared to 3DGS for
static scenes, while maintaining the quality of the scene representation. For
dynamic scenes, our approach achieves a more than 12x reduction in storage while
retaining reconstruction quality on par with existing state-of-the-art
methods. Our work provides a comprehensive framework for 3D scene
representation, achieving high performance, fast training, compactness, and
real-time rendering. Our project page is available at
https://maincold2.github.io/c3dgs/.
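The residual vector quantization mentioned in the abstract can be illustrated with a small sketch: each stage holds a codebook, and every stage after the first quantizes the residual left by the previous stages, so an attribute vector is stored as one small index per stage instead of raw floats. This is a hedged toy example in NumPy with made-up codebooks, not the paper's implementation (which learns the codebooks jointly with the scene):

```python
import numpy as np

def residual_vq(x, codebooks):
    """Encode vector x with a chain of codebooks (residual VQ sketch).

    Each stage picks the codeword nearest to the current residual, so the
    approximation error can only shrink (or stay equal) as stages are added.
    Returns the per-stage indices and the final reconstruction.
    """
    x = np.asarray(x, dtype=float)
    recon = np.zeros_like(x)
    indices = []
    for cb in codebooks:
        residual = x - recon                      # what is still unexplained
        dists = np.linalg.norm(cb - residual, axis=1)
        i = int(np.argmin(dists))                 # nearest codeword to residual
        indices.append(i)
        recon = recon + cb[i]                     # refine the reconstruction
    return indices, recon

# Toy usage: a 2-D attribute vector and two hand-picked 3-entry codebooks.
x = np.array([0.9, -0.4])
cb1 = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])   # coarse stage
cb2 = np.array([[0.0, -0.5], [0.1, 0.1], [0.0, 0.0]])   # residual stage
idx, recon = residual_vq(x, [cb1, cb2])
```

Storing `idx` (two small integers) in place of the float vector is where the compression comes from; in the paper this idea is applied to the geometric and temporal attributes of each Gaussian.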