FreeTimeGS: Free Gaussians at Anytime and Anywhere for Dynamic Scene Reconstruction
June 5, 2025
Authors: Yifan Wang, Peishan Yang, Zhen Xu, Jiaming Sun, Zhanhua Zhang, Yong Chen, Hujun Bao, Sida Peng, Xiaowei Zhou
cs.AI
Abstract
This paper addresses the challenge of reconstructing dynamic 3D scenes with
complex motions. Some recent works define 3D Gaussian primitives in a
canonical space and use deformation fields to map the canonical primitives to
observation space, achieving real-time dynamic view synthesis. However, these
methods often struggle with scenes exhibiting complex motions because the
deformation fields are difficult to optimize. To overcome this problem, we
propose FreeTimeGS, a novel 4D representation that allows Gaussian primitives
to appear at arbitrary times and locations. Compared with canonical Gaussian
primitives, our representation offers greater flexibility and thus improves
the ability to model dynamic 3D scenes. In addition, we endow each Gaussian
primitive with a motion function, allowing it to move to neighboring regions
over time, which reduces temporal redundancy. Experimental results on several
datasets show that our method outperforms recent methods in rendering quality
by a large margin.
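To make the representation concrete, here is a minimal sketch of what a FreeTimeGS-style primitive could look like. The class name `FreeTimeGaussian`, the linear form of the motion function, and the Gaussian temporal-opacity window are illustrative assumptions; the abstract only states that each primitive has its own time and location of appearance plus a motion function that lets it drift to neighboring regions over time.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class FreeTimeGaussian:
    """Hypothetical 4D Gaussian primitive (a sketch, not the paper's code)."""
    mu_x: np.ndarray      # 3D center at the primitive's own time mu_t
    mu_t: float           # time at which the primitive "appears"
    sigma_t: float        # temporal extent: how long it stays influential
    velocity: np.ndarray  # coefficients of an assumed linear motion function
    opacity: float        # base (spatial) opacity

    def position(self, t: float) -> np.ndarray:
        # Motion function: shift the center to neighboring regions over time.
        return self.mu_x + self.velocity * (t - self.mu_t)

    def temporal_opacity(self, t: float) -> float:
        # Attenuate the primitive with a Gaussian window around mu_t,
        # so each primitive contributes mostly near its own time.
        w = np.exp(-0.5 * ((t - self.mu_t) / self.sigma_t) ** 2)
        return self.opacity * w


# Query one primitive at a nearby time step.
g = FreeTimeGaussian(
    mu_x=np.array([0.0, 1.0, 2.0]),
    mu_t=0.5,
    sigma_t=0.1,
    velocity=np.array([0.2, 0.0, -0.1]),
    opacity=0.9,
)
print(g.position(0.6))          # center shifted by velocity * 0.1
print(g.temporal_opacity(0.6))  # opacity attenuated away from mu_t
```

Under these assumptions, a renderer would evaluate `position(t)` and `temporal_opacity(t)` for every primitive at the query time before standard Gaussian splatting, which is how a short-lived, moving primitive can cover a motion segment that a single canonical primitive plus deformation field would struggle to fit.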