Hybrid 3D-4D Gaussian Splatting for Fast Dynamic Scene Representation
May 19, 2025
Authors: Seungjun Oh, Younggeun Lee, Hyejin Jeon, Eunbyung Park
cs.AI
Abstract
Recent advancements in dynamic 3D scene reconstruction have shown promising
results, enabling high-fidelity 3D novel view synthesis with improved temporal
consistency. Among these, 4D Gaussian Splatting (4DGS) has emerged as an
appealing approach due to its ability to model high-fidelity spatial and
temporal variations. However, existing methods suffer from substantial
computational and memory overhead due to the redundant allocation of 4D
Gaussians to static regions, which can also degrade image quality. In this
work, we introduce hybrid 3D-4D Gaussian Splatting (3D-4DGS), a novel framework
that adaptively represents static regions with 3D Gaussians while reserving 4D
Gaussians for dynamic elements. Our method begins with a fully 4D Gaussian
representation and iteratively converts temporally invariant Gaussians into 3D,
significantly reducing the number of parameters and improving computational
efficiency. Meanwhile, dynamic Gaussians retain their full 4D representation,
capturing complex motions with high fidelity. Our approach achieves
significantly faster training times compared to baseline 4D Gaussian Splatting
methods while maintaining or improving the visual quality.
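The core idea of the hybrid conversion step — classifying temporally invariant Gaussians and demoting them from 4D to 3D — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of per-Gaussian temporal scale as the invariance signal, and the coverage threshold are all assumptions for exposition.

```python
import numpy as np

def split_static_dynamic(temporal_scales, duration, threshold=0.9):
    """Hypothetical sketch of the 3D-4DGS conversion criterion.

    A 4D Gaussian whose extent along the time axis spans (nearly) the
    whole sequence is effectively time-invariant, so it can be converted
    to a 3D Gaussian; the rest keep their full 4D representation.

    temporal_scales : per-Gaussian extent along the time axis (assumed)
    duration        : total length of the captured sequence
    threshold       : fraction of the sequence a Gaussian must cover to
                      count as static (illustrative value, not from the paper)
    """
    # Fraction of the sequence each Gaussian is "alive" for.
    coverage = np.clip(np.asarray(temporal_scales) / duration, 0.0, 1.0)
    static_mask = coverage >= threshold      # convert these to 3D
    dynamic_mask = ~static_mask              # these stay 4D
    return static_mask, dynamic_mask
```

In the paper this check is applied iteratively during training, so Gaussians that settle into time-invariance are progressively moved to the cheaper 3D representation, which is where the parameter and training-time savings come from.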