
FreeTimeGS: Free Gaussians at Anytime and Anywhere for Dynamic Scene Reconstruction

June 5, 2025
作者: Yifan Wang, Peishan Yang, Zhen Xu, Jiaming Sun, Zhanhua Zhang, Yong Chen, Hujun Bao, Sida Peng, Xiaowei Zhou
cs.AI

Abstract

This paper addresses the challenge of reconstructing dynamic 3D scenes with complex motions. Some recent works define 3D Gaussian primitives in a canonical space and use deformation fields to map the canonical primitives to observation space, achieving real-time dynamic view synthesis. However, these methods often struggle with scenes containing complex motions because the deformation fields are difficult to optimize. To overcome this problem, we propose FreeTimeGS, a novel 4D representation that allows Gaussian primitives to appear at arbitrary times and locations. In contrast to canonical Gaussian primitives, our representation offers greater flexibility and thus improves the ability to model dynamic 3D scenes. In addition, we endow each Gaussian primitive with a motion function, allowing it to move to neighboring regions over time, which reduces temporal redundancy. Experimental results on several datasets show that the rendering quality of our method outperforms recent methods by a large margin.
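
The sketch below is a rough illustration of the idea described in the abstract, not the authors' implementation: each primitive owns a spatial center, an anchor time, a simple linear motion function, and a temporal opacity window. All names and the specific linear-motion and Gaussian-in-time forms are assumptions made for illustration.

```python
# Hypothetical sketch of a "free" 4D Gaussian primitive: a spatial center, an
# anchor time, a linear motion function, and a temporal opacity window.
# Functional forms and names are illustrative assumptions, not the paper's code.
import numpy as np
from dataclasses import dataclass


@dataclass
class FreeTimeGaussian:
    center: np.ndarray    # (3,) spatial mean at the anchor time t0
    t0: float             # time at which the primitive is anchored
    velocity: np.ndarray  # (3,) assumed linear motion toward neighboring regions
    scale: np.ndarray     # (3,) per-axis spatial extent
    opacity: float        # peak opacity
    sigma_t: float        # temporal extent: how long the primitive stays visible

    def position(self, t: float) -> np.ndarray:
        """Center at query time t under the assumed linear motion function."""
        return self.center + self.velocity * (t - self.t0)

    def temporal_opacity(self, t: float) -> float:
        """Opacity modulated by a 1D Gaussian in time around t0 (assumed form)."""
        return self.opacity * np.exp(-0.5 * ((t - self.t0) / self.sigma_t) ** 2)


# Usage: a primitive anchored at t0 = 0.5 that drifts along +x and fades
# outside a ~0.1-wide temporal window around its anchor time.
g = FreeTimeGaussian(
    center=np.zeros(3),
    t0=0.5,
    velocity=np.array([0.2, 0.0, 0.0]),
    scale=np.full(3, 0.01),
    opacity=0.9,
    sigma_t=0.1,
)
print(g.position(0.6), g.temporal_opacity(0.6))
```

Because each primitive carries its own anchor time and motion, no global deformation field has to be optimized; a primitive only needs to explain the scene near its own spatio-temporal window, which is the flexibility the abstract contrasts with canonical-space representations.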