GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
December 18, 2023
Authors: Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao
cs.AI
Abstract
We propose a method for dynamic scene reconstruction using deformable 3D
Gaussians that is tailored for monocular video. Building upon the efficiency of
Gaussian splatting, our approach extends the representation to accommodate
dynamic elements via a deformable set of Gaussians residing in a canonical
space, and a time-dependent deformation field defined by a multi-layer
perceptron (MLP). Moreover, under the assumption that most natural scenes have
large regions that remain static, we allow the MLP to focus its
representational power by additionally including a static Gaussian point cloud.
The concatenated dynamic and static point clouds form the input for the
Gaussian Splatting rasterizer, enabling real-time rendering. The differentiable
pipeline is optimized end-to-end with a self-supervised rendering loss. Our
method achieves results that are comparable to state-of-the-art dynamic neural
radiance field methods while allowing much faster optimization and rendering.
Project website: https://lynl7130.github.io/gaufre/index.html
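The two-branch design described above — canonical dynamic Gaussians warped by a time-dependent MLP, then concatenated with a static Gaussian point cloud before rasterization — can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussians are reduced to 3D means only, and all names, sizes, and the tiny one-hidden-layer MLP are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: dynamic Gaussians live in a canonical space,
# static Gaussians bypass the deformation field entirely.
dynamic_means = rng.standard_normal((100, 3))   # deformable set (canonical)
static_means = rng.standard_normal((400, 3))    # static regions of the scene

# A tiny time-conditioned MLP predicting per-Gaussian position offsets.
# Input is (x, y, z, t); the real method conditions a deeper MLP on time.
W1 = rng.standard_normal((4, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 3)) * 0.1
b2 = np.zeros(3)

def deform(means, t):
    """Apply the time-dependent deformation field to canonical means."""
    inp = np.concatenate([means, np.full((len(means), 1), t)], axis=1)
    hidden = np.maximum(inp @ W1 + b1, 0.0)     # ReLU hidden layer
    return means + hidden @ W2 + b2             # residual offset per Gaussian

t = 0.5                                          # query time in [0, 1]
deformed = deform(dynamic_means, t)

# The concatenated dynamic + static cloud is what a Gaussian Splatting
# rasterizer would consume for this timestamp.
render_input = np.concatenate([deformed, static_means], axis=0)
print(render_input.shape)                        # (500, 3)
```

In the full method this pipeline is differentiable end to end, so the rendering loss drives both the canonical Gaussians and the MLP weights; here only the forward pass is sketched.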