ChatPaper.ai


Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields

August 7, 2024
Authors: Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, Eunbyung Park
cs.AI

Abstract

3D Gaussian splatting (3DGS) has recently emerged as an alternative representation that leverages a 3D Gaussian-based representation and introduces an approximated volumetric rendering, achieving very fast rendering speed and promising image quality. Furthermore, subsequent studies have successfully extended 3DGS to dynamic 3D scenes, demonstrating its wide range of applications. However, a significant drawback arises as 3DGS and its following methods entail a substantial number of Gaussians to maintain the high fidelity of the rendered images, which requires a large amount of memory and storage. To address this critical issue, we place a specific emphasis on two key objectives: reducing the number of Gaussian points without sacrificing performance and compressing the Gaussian attributes, such as view-dependent color and covariance. To this end, we propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance. In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field rather than relying on spherical harmonics. Finally, we learn codebooks to compactly represent the geometric and temporal attributes by residual vector quantization. With model compression techniques such as quantization and entropy coding, we consistently show over 25x reduced storage and enhanced rendering speed compared to 3DGS for static scenes, while maintaining the quality of the scene representation. For dynamic scenes, our approach achieves more than 12x storage efficiency and retains a high-quality reconstruction compared to the existing state-of-the-art methods. Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering. Our project page is available at https://maincold2.github.io/c3dgs/.
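The learnable mask strategy described above gates each Gaussian with a trainable parameter so that low-contribution Gaussians can be pruned. A minimal numpy sketch of the forward pass, assuming a sigmoid-based soft mask binarized against a small threshold (function and parameter names are illustrative, not the authors' API; in training, a straight-through estimator would pass gradients through the soft mask):

```python
import numpy as np

def masked_gaussians(opacity, scale, mask_logit, eps=0.01):
    """Gate per-Gaussian opacity and scale with a learnable mask.

    A sigmoid maps each logit to a soft mask in (0, 1); thresholding
    yields a hard 0/1 mask, and Gaussians masked to 0 can be pruned.
    """
    soft = 1.0 / (1.0 + np.exp(-mask_logit))   # soft mask in (0, 1)
    hard = (soft > eps).astype(opacity.dtype)  # binarized hard mask
    keep = hard.astype(bool)                   # which Gaussians survive pruning
    return opacity * hard, scale * hard[:, None], keep
```

Because masked-out Gaussians contribute nothing to rendering, dropping them reduces both storage and per-frame compute, which is where the reported storage and speed gains partly come from.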

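The abstract also mentions compressing geometric and temporal attributes with codebooks learned via residual vector quantization (RVQ): each stage quantizes the residual left by the previous stage, so a few small codebooks approximate each attribute vector. A minimal numpy sketch of RVQ encoding and decoding (assumes pre-learned codebooks; names are illustrative):

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: stage i quantizes the residual
    left after stage i-1, storing one codebook index per stage."""
    residual = x.copy()
    indices = []
    for cb in codebooks:                       # cb has shape (K, D)
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)             # nearest code per vector
        indices.append(idx)
        residual = residual - cb[idx]          # pass residual to next stage
    return indices, residual

def rvq_decode(indices, codebooks):
    """Reconstruct vectors by summing the selected code from each stage."""
    return sum(cb[idx] for idx, cb in zip(indices, codebooks))
```

Storing a few integer indices per Gaussian instead of full-precision attribute vectors is what makes the representation compact; quantization and entropy coding then shrink the indices and codebooks further.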

November 28, 2024