Hybrid 3D-4D Gaussian Splatting for Fast Dynamic Scene Representation
May 19, 2025
Authors: Seungjun Oh, Younggeun Lee, Hyejin Jeon, Eunbyung Park
cs.AI
Abstract
Recent advancements in dynamic 3D scene reconstruction have shown promising
results, enabling high-fidelity 3D novel view synthesis with improved temporal
consistency. Among these, 4D Gaussian Splatting (4DGS) has emerged as an
appealing approach due to its ability to model high-fidelity spatial and
temporal variations. However, existing methods suffer from substantial
computational and memory overhead due to the redundant allocation of 4D
Gaussians to static regions, which can also degrade image quality. In this
work, we introduce hybrid 3D-4D Gaussian Splatting (3D-4DGS), a novel framework
that adaptively represents static regions with 3D Gaussians while reserving 4D
Gaussians for dynamic elements. Our method begins with a fully 4D Gaussian
representation and iteratively converts temporally invariant Gaussians into 3D,
significantly reducing the number of parameters and improving computational
efficiency. Meanwhile, dynamic Gaussians retain their full 4D representation,
capturing complex motions with high fidelity. Our approach achieves
significantly faster training times compared to baseline 4D Gaussian Splatting
methods while maintaining or improving visual quality.
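As a rough illustration of the conversion rule described in the abstract, the sketch below flags temporally invariant Gaussians, assuming each 4D Gaussian carries a temporal center mu_t and a temporal scale s_t. The function name, the 3-sigma support heuristic, and the coverage threshold are assumptions made for illustration, not the authors' actual criterion.

```python
import torch

def split_static_dynamic(mu_t, s_t, t_start=0.0, t_end=1.0, coverage=0.99):
    """Flag 4D Gaussians whose temporal support spans (nearly) the whole
    sequence as candidates for conversion to plain 3D Gaussians.

    mu_t, s_t: (N,) tensors of temporal centers and temporal standard
    deviations (hypothetical parameter names).
    """
    # A 1D Gaussian covers roughly [mu_t - 3*s_t, mu_t + 3*s_t]; clip the
    # interval to the sequence and measure the fraction of time it covers.
    lo = (mu_t - 3.0 * s_t).clamp(min=t_start)
    hi = (mu_t + 3.0 * s_t).clamp(max=t_end)
    span = (hi - lo) / (t_end - t_start)
    return span >= coverage  # True -> convert to 3D; False -> keep 4D

# Example: only the Gaussian with a large temporal scale is marked static.
mu_t = torch.tensor([0.5, 0.5, 0.1])
s_t = torch.tensor([1.0, 0.05, 0.02])
print(split_static_dynamic(mu_t, s_t))  # tensor([ True, False, False])
```

In the paper's pipeline, a test along these lines would be applied iteratively during training: Gaussians flagged as static drop their temporal parameters and are rendered as 3D Gaussians, while the rest retain the full 4D parameterization for modeling motion.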