MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting
October 10, 2024
Authors: Ruijie Zhu, Yanzhe Liang, Hanzhi Chang, Jiacheng Deng, Jiahao Lu, Wenfei Yang, Tianzhu Zhang, Yongdong Zhang
cs.AI
Abstract
Dynamic scene reconstruction is a long-term challenge in the field of 3D
vision. Recently, the emergence of 3D Gaussian Splatting has provided new
insights into this problem. Although subsequent efforts have rapidly extended
static 3D Gaussians to dynamic scenes, they often lack explicit constraints on object
motion, leading to optimization difficulties and performance degradation. To
address the above issues, we propose a novel deformable 3D Gaussian splatting
framework called MotionGS, which explores explicit motion priors to guide the
deformation of 3D Gaussians. Specifically, we first introduce an optical flow
decoupling module that decouples optical flow into camera flow and motion flow,
corresponding to camera movement and object motion respectively. Then the
motion flow can effectively constrain the deformation of 3D Gaussians, thus
simulating the motion of dynamic objects. Additionally, a camera pose
refinement module is proposed to alternately optimize 3D Gaussians and camera
poses, mitigating the impact of inaccurate camera poses. Extensive experiments
on monocular dynamic scenes validate that MotionGS surpasses
state-of-the-art methods and exhibits significant superiority in both
qualitative and quantitative results. Project page:
https://ruijiezhu94.github.io/MotionGS_page
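The flow-decoupling idea described above can be sketched in a few lines: given a per-pixel depth map and the relative camera pose between two frames, the flow induced purely by camera motion is obtained by back-projecting, transforming, and reprojecting each pixel; subtracting it from the total optical flow leaves the object-motion flow. This is a minimal illustration of the general decomposition, not the authors' implementation; all function names and the assumption of known depth and pose are hypothetical.

```python
import numpy as np

def camera_flow(depth, K, R, t):
    """Per-pixel 2D flow induced purely by camera motion.

    Back-projects each pixel of frame t using its depth, applies the
    relative pose (R, t) to frame t+1, and reprojects with intrinsics K.
    depth: (H, W) array; K: (3, 3); R: (3, 3); t: (3,).
    Returns an (H, W, 2) flow field in pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape 3 x N.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    # Back-project to 3D points in the first camera's frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Move into the second camera's frame and reproject.
    pts2 = R @ pts + t.reshape(3, 1)
    proj = K @ pts2
    uv2 = proj[:2] / proj[2:3]
    return (uv2 - pix[:2]).T.reshape(H, W, 2)

def motion_flow(optical_flow, depth, K, R, t):
    """Object-motion flow = total optical flow minus camera-induced flow."""
    return optical_flow - camera_flow(depth, K, R, t)
```

With a static camera (identity rotation, zero translation) the camera flow vanishes and the motion flow equals the full optical flow, which is the degenerate case of the decomposition.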