Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred Image

March 21, 2025
作者: Jerred Chen, Ronald Clark
cs.AI

Abstract

In many robotics and VR/AR applications, fast camera motions cause a high level of motion blur, which makes existing camera pose estimation methods fail. In this work, we propose a novel framework that leverages motion blur as a rich cue for motion estimation rather than treating it as an unwanted artifact. Our approach works by predicting a dense motion flow field and a monocular depth map directly from a single motion-blurred image. We then recover the instantaneous camera velocity by solving a linear least squares problem under the small motion assumption. In essence, our method produces an IMU-like measurement that robustly captures fast and aggressive camera movements. To train our model, we construct a large-scale dataset with realistic synthetic motion blur derived from ScanNet++v2 and further refine our model by training end-to-end on real data using our fully differentiable pipeline. Extensive evaluations on real-world benchmarks demonstrate that our method achieves state-of-the-art angular and translational velocity estimates, outperforming current methods like MASt3R and COLMAP.
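The velocity-recovery step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the classic instantaneous (differential) motion-field model, in which each pixel with normalized coordinates (x, y), depth Z, and an observed flow vector contributes two equations that are linear in the unknown translational velocity v and angular velocity ω; stacking all pixels yields the linear least-squares problem mentioned in the paper. The function name solve_camera_velocity and the pinhole intrinsics fx, fy, cx, cy are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): recover instantaneous camera velocity
# (v, omega) from a dense motion flow field and a monocular depth map by solving
# a linear least-squares problem under the small-motion assumption, using the
# standard differential motion-field model (Longuet-Higgins / Prazdny):
#   u = (-t_x + x t_z)/Z + w_x*x*y - w_y*(1 + x^2) + w_z*y
#   v = (-t_y + y t_z)/Z + w_x*(1 + y^2) - w_y*x*y - w_z*x
import numpy as np

def solve_camera_velocity(flow, depth, fx, fy, cx, cy):
    """flow: (H, W, 2) motion flow in pixels (x-flow, y-flow); depth: (H, W).
    Returns (v, omega): translational and angular velocity, up to the time
    scale implied by the flow (e.g. the blur exposure interval)."""
    H, W = depth.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    # Normalized image coordinates under a pinhole model.
    x = (xs - cx) / fx
    y = (ys - cy) / fy
    # Flow converted from pixels to normalized coordinates.
    u_flow = flow[..., 0] / fx
    v_flow = flow[..., 1] / fy
    Zinv = 1.0 / depth

    zeros = np.zeros_like(x)
    # Per-pixel rows of the linear system [A_t | A_r] [v; omega] = flow.
    # Row for the horizontal flow component: translational part, then rotational part.
    row_u = np.stack([-Zinv, zeros, x * Zinv,
                      x * y, -(1 + x**2), y], axis=-1)
    # Row for the vertical flow component.
    row_v = np.stack([zeros, -Zinv, y * Zinv,
                      1 + y**2, -x * y, -x], axis=-1)

    A = np.concatenate([row_u.reshape(-1, 6), row_v.reshape(-1, 6)], axis=0)
    b = np.concatenate([u_flow.reshape(-1), v_flow.reshape(-1)], axis=0)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]  # v (translational), omega (angular)
```

In practice the flow and depth would come from the network's predictions, and a robust variant (e.g. pixel weighting or RANSAC over the stacked equations) could replace the plain least-squares solve; the sketch only shows the linear structure of the problem.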
