Distilled-3DGS: Distilled 3D Gaussian Splatting

August 19, 2025
作者: Lintao Xiang, Xinkai Chen, Jianhuang Lai, Guangcong Wang
cs.AI

Abstract
3D Gaussian Splatting (3DGS) has exhibited remarkable efficacy in novel view synthesis (NVS). However, it suffers from a significant drawback: achieving high-fidelity rendering typically requires a large number of 3D Gaussians, resulting in substantial memory and storage consumption. To address this challenge, we propose the first knowledge distillation framework for 3DGS, featuring several teacher models: vanilla 3DGS, noise-augmented variants, and dropout-regularized versions. The outputs of these teachers are aggregated to guide the optimization of a lightweight student model. To distill the hidden geometric structure, we further propose a structural similarity loss that enforces consistency between the spatial geometric distributions of the student and teacher models. Comprehensive quantitative and qualitative evaluations across diverse datasets show that the proposed Distilled-3DGS, a simple yet effective framework without bells and whistles, achieves promising results in both rendering quality and storage efficiency compared to state-of-the-art methods. Project page: https://distilled3dgs.github.io. Code: https://github.com/lt-xiang/Distilled-3DGS.
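The multi-teacher distillation idea in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the paper's actual losses: the aggregation is assumed to be a simple mean over teacher renders, the photometric distillation term is assumed to be an L1 loss against that mean, and the per-axis histogram comparison standing in for the structural similarity loss is a hypothetical proxy for "consistency of spatial geometric distributions". All function names are invented for this sketch.

```python
import numpy as np

def aggregate_teacher_renders(renders):
    """Average the renders from multiple teachers (vanilla 3DGS,
    noise-augmented, dropout-regularized) into one pseudo-target."""
    return np.mean(np.stack(renders), axis=0)

def distillation_loss(student_render, teacher_renders):
    """Assumed photometric distillation term: L1 distance between the
    student render and the aggregated teacher render."""
    target = aggregate_teacher_renders(teacher_renders)
    return float(np.abs(student_render - target).mean())

def structural_similarity_loss(student_xyz, teacher_xyz, bins=16):
    """Hypothetical geometric term: compare normalized histograms of
    Gaussian center coordinates along each spatial axis, so the
    student's point distribution is pushed toward the teacher's."""
    loss = 0.0
    for axis in range(3):
        lo = min(student_xyz[:, axis].min(), teacher_xyz[:, axis].min())
        hi = max(student_xyz[:, axis].max(), teacher_xyz[:, axis].max())
        hs, _ = np.histogram(student_xyz[:, axis], bins=bins,
                             range=(lo, hi), density=True)
        ht, _ = np.histogram(teacher_xyz[:, axis], bins=bins,
                             range=(lo, hi), density=True)
        loss += float(np.abs(hs - ht).mean())
    return loss / 3.0
```

In a real pipeline both terms would be differentiable (e.g. in PyTorch) and weighted into the student's training objective; the NumPy version above only conveys the structure of the computation.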