
CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images

July 4, 2024
Authors: Junghe Lee, Donghyeong Kim, Dogyoon Lee, Suhwan Cho, Sangyoun Lee
cs.AI

Abstract

Neural radiance fields (NeRFs) have received significant attention due to their high-quality novel view rendering ability, prompting research to address various real-world cases. One critical challenge is camera motion blur, caused by camera movement during the exposure time, which prevents accurate 3D scene reconstruction. In this study, we propose Continuous Rigid Motion-Aware Gaussian Splatting (CRiM-GS) to reconstruct an accurate 3D scene from blurry images at real-time rendering speed. Considering the actual camera motion blurring process, which consists of complex motion patterns, we predict the continuous movement of the camera based on neural ordinary differential equations (ODEs). Specifically, we leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object. Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems by ensuring a higher degree of freedom. By revisiting fundamental camera theory and employing advanced neural network training techniques, we achieve accurate modeling of continuous camera trajectories. We conduct extensive experiments, demonstrating state-of-the-art performance both quantitatively and qualitatively on benchmark datasets.
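The abstract describes modeling the camera's motion during exposure as a continuous rigid-body trajectory in SE(3). A minimal NumPy sketch of the underlying geometry follows: the SE(3) exponential map turns a twist (angular and linear velocity) into a rigid transform, and sampling it at several times within the exposure interval yields a continuous pose trajectory. This is only an illustration of the rigid-motion parameterization; the constant twist `xi` here is a hypothetical stand-in for the velocity field that the paper's neural ODE would predict.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (the so(3) hat operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi, t):
    """Exponential map of a constant twist xi = (w, v), scaled by time t,
    producing a 4x4 rigid-body transform in SE(3)."""
    w, v = xi[:3] * t, xi[3:] * t
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:
        # Small-angle limit: first-order rotation, identity left Jacobian.
        R = np.eye(3) + W
        V = np.eye(3)
    else:
        # Rodrigues' formula for the rotation ...
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1.0 - np.cos(theta)) / theta**2 * W @ W)
        # ... and the left Jacobian coupling rotation to translation.
        V = (np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

# Hypothetical constant twist (angular velocity, linear velocity); in
# CRiM-GS this velocity field would be predicted by a neural ODE instead.
xi = np.array([0.0, 0.0, 0.3, 0.1, 0.0, 0.0])

# Sample camera poses continuously over the exposure interval [0, 1];
# rendering at each pose and averaging would simulate motion blur.
poses = [se3_exp(xi, t) for t in np.linspace(0.0, 1.0, 5)]
```

Each sampled pose is a valid rigid transform (orthonormal rotation, no scaling or shearing), which is exactly the shape-and-size-preserving property the rigid-body parameterization provides.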

