
CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images

July 4, 2024
Authors: Junghe Lee, Donghyeong Kim, Dogyoon Lee, Suhwan Cho, Sangyoun Lee
cs.AI

Abstract

Neural radiance fields (NeRFs) have received significant attention due to their high-quality novel view rendering ability, prompting research to address various real-world cases. One critical challenge is camera motion blur, caused by camera movement during the exposure time, which prevents accurate 3D scene reconstruction. In this study, we propose Continuous Rigid Motion-Aware Gaussian Splatting (CRiM-GS) to reconstruct an accurate 3D scene from blurry images with real-time rendering speed. Considering the actual camera motion blurring process, which consists of complex motion patterns, we predict the continuous movement of the camera based on neural ordinary differential equations (ODEs). Specifically, we leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object. Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems by ensuring a higher degree of freedom. By revisiting fundamental camera theory and employing advanced neural network training techniques, we achieve accurate modeling of continuous camera trajectories. We conduct extensive experiments, demonstrating state-of-the-art performance both quantitatively and qualitatively on benchmark datasets.
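The core idea in the abstract, integrating a continuous velocity (twist) field into an SE(3) camera trajectory, can be illustrated with a minimal sketch. The names below (`hat`, `expm`, `integrate_trajectory`, `velocity_fn`) are hypothetical, and the constant-twist velocity stands in for the learned neural ODE used in CRiM-GS; this is not the authors' implementation.

```python
import numpy as np

def hat(xi):
    """Map a 6-vector twist (wx, wy, wz, vx, vy, vz) in se(3) to its 4x4 matrix form."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([
        [0.0, -wz,  wy, vx],
        [ wz, 0.0, -wx, vy],
        [-wy,  wx, 0.0, vz],
        [0.0, 0.0, 0.0, 0.0],
    ])

def expm(A, terms=20):
    """Matrix exponential via a truncated Taylor series (adequate for small 4x4 inputs)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def integrate_trajectory(velocity_fn, T0, t_grid):
    """Euler-integrate an se(3) velocity field into a sequence of SE(3) camera poses.

    velocity_fn(t) returns a 6-vector twist; in CRiM-GS this role is played
    by a neural ODE, and the poses along the exposure are used to render and
    average the sub-frame images that form the blurry observation.
    """
    poses = [T0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        # Each step applies the exact exponential of the sampled twist,
        # so every pose remains a valid rigid body transformation.
        T = poses[-1] @ expm(hat(velocity_fn(t0)) * dt)
        poses.append(T)
    return poses

# Usage: a constant twist (slow roll plus forward drift) over one exposure.
T0 = np.eye(4)
twist = lambda t: np.array([0.0, 0.0, 0.5, 0.1, 0.0, 0.0])
poses = integrate_trajectory(twist, T0, np.linspace(0.0, 1.0, 11))
```

Because each step composes with a matrix exponential of an se(3) element rather than adding raw increments, the rotation block stays orthonormal, which is the regularization benefit the abstract attributes to rigid body transformations.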
