Distilled-3DGS: Distilled 3D Gaussian Splatting
August 19, 2025
Authors: Lintao Xiang, Xinkai Chen, Jianhuang Lai, Guangcong Wang
cs.AI
Abstract
3D Gaussian Splatting (3DGS) has exhibited remarkable efficacy in novel view
synthesis (NVS). However, it suffers from a significant drawback: achieving
high-fidelity rendering typically necessitates a large number of 3D Gaussians,
resulting in substantial memory consumption and storage requirements. To
address this challenge, we propose the first knowledge distillation framework
for 3DGS, featuring various teacher models, including vanilla 3DGS,
noise-augmented variants, and dropout-regularized versions. The outputs of
these teachers are aggregated to guide the optimization of a lightweight
student model. To distill the hidden geometric structure, we propose a
structural similarity loss to boost the consistency of spatial geometric
distributions between the student and teacher models. Through comprehensive
quantitative and qualitative evaluations across diverse datasets, the proposed
Distilled-3DGS, a simple yet effective framework without bells and whistles,
achieves promising results in both rendering quality and storage
efficiency compared to state-of-the-art methods. Project page:
https://distilled3dgs.github.io. Code:
https://github.com/lt-xiang/Distilled-3DGS.
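The abstract describes two ingredients: aggregating the outputs of several teacher models (vanilla, noise-augmented, dropout-regularized 3DGS) into a target for a lightweight student, and a structural similarity loss that aligns the spatial distribution of the student's and teachers' Gaussians. The sketch below illustrates one plausible reading of that recipe in NumPy. All concrete choices here are assumptions, not the paper's implementation: the mean as the aggregation rule, L1 as the photometric term, and a voxel-occupancy histogram over Gaussian centers as a stand-in for the "spatial geometric distribution".

```python
import numpy as np

def aggregate_teachers(renders):
    # Average the renders from the teacher ensemble (vanilla,
    # noise-augmented, dropout-regularized) into one pseudo target.
    # Mean aggregation is an assumption; the paper only says the
    # teacher outputs are "aggregated".
    return np.mean(np.stack(renders), axis=0)

def spatial_histogram(positions, bins=8, lo=-1.0, hi=1.0):
    # Voxel-occupancy histogram over 3D Gaussian centers, normalized
    # to a probability distribution -- a simple hypothetical proxy for
    # the "spatial geometric distribution" the structural loss compares.
    h, _ = np.histogramdd(positions, bins=bins, range=[(lo, hi)] * 3)
    return h.ravel() / max(h.sum(), 1.0)

def distillation_loss(student_render, teacher_renders,
                      student_pos, teacher_pos, lam=0.1):
    # Photometric term: L1 between the student render and the
    # aggregated teacher render (choice of L1 is an assumption).
    target = aggregate_teachers(teacher_renders)
    photo = np.abs(student_render - target).mean()
    # Structural term: L1 distance between the spatial histograms of
    # student and teacher Gaussian centers.
    struct = np.abs(spatial_histogram(student_pos)
                    - spatial_histogram(teacher_pos)).sum()
    return photo + lam * struct
```

When the student exactly reproduces the aggregated teacher render and the teachers' spatial layout, this loss is zero; any photometric or geometric deviation increases it, which is the qualitative behavior a distillation objective of this kind needs.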