
MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting

October 10, 2024
Authors: Ruijie Zhu, Yanzhe Liang, Hanzhi Chang, Jiacheng Deng, Jiahao Lu, Wenfei Yang, Tianzhu Zhang, Yongdong Zhang
cs.AI

Abstract

Dynamic scene reconstruction is a long-standing challenge in the field of 3D vision. Recently, the emergence of 3D Gaussian Splatting has provided new insights into this problem. Although subsequent efforts have rapidly extended static 3D Gaussians to dynamic scenes, they often lack explicit constraints on object motion, leading to optimization difficulties and performance degradation. To address these issues, we propose a novel deformable 3D Gaussian splatting framework called MotionGS, which explores explicit motion priors to guide the deformation of 3D Gaussians. Specifically, we first introduce an optical flow decoupling module that decouples optical flow into camera flow and motion flow, corresponding to camera movement and object motion respectively. The motion flow then effectively constrains the deformation of 3D Gaussians, thus modeling the motion of dynamic objects. Additionally, a camera pose refinement module is proposed to alternately optimize 3D Gaussians and camera poses, mitigating the impact of inaccurate camera poses. Extensive experiments on monocular dynamic scenes validate that MotionGS surpasses state-of-the-art methods and exhibits significant superiority in both qualitative and quantitative results. Project page: https://ruijiezhu94.github.io/MotionGS_page
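
The abstract does not include code, but the flow-decoupling step it describes follows standard rigid-scene geometry: warping each pixel through the relative camera pose and a depth map gives the flow induced by camera motion alone, and subtracting that from the observed optical flow leaves the residual attributable to object motion. Below is a minimal NumPy sketch of this idea; the function names, the reliance on a per-pixel depth map, and all interfaces are assumptions made for illustration, not the authors' implementation.

    import numpy as np

    def camera_flow(depth, K, R, t):
        """Flow induced purely by camera motion between two frames.

        depth : (H, W) depth map of frame 1
        K     : (3, 3) camera intrinsics
        R, t  : relative rotation (3, 3) and translation (3,)
                from frame 1 to frame 2
        Returns an (H, W, 2) flow field in pixels.
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
        # Back-project each pixel to a 3D point using its depth...
        pts = depth[..., None] * (pix @ np.linalg.inv(K).T)
        # ...rigidly move the points into frame 2, then re-project.
        pts2 = pts @ R.T + t
        proj = pts2 @ K.T
        uv2 = proj[..., :2] / proj[..., 2:3]
        return uv2 - pix[..., :2]

    def motion_flow(total_flow, depth, K, R, t):
        """Object-motion component: observed optical flow (e.g., from an
        off-the-shelf estimator) minus the camera-induced flow."""
        return total_flow - camera_flow(depth, K, R, t)

Per the abstract, the resulting motion flow is what constrains the deformation of the 3D Gaussians, while the camera pose refinement module alternates between optimizing the Gaussians and the camera poses.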
