

DyBluRF: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video

December 21, 2023
作者: Minh-Quan Viet Bui, Jongmin Park, Jihyong Oh, Munchurl Kim
cs.AI

Abstract

Video view synthesis, allowing for the creation of visually appealing frames from arbitrary viewpoints and times, offers immersive viewing experiences. Neural radiance fields, particularly NeRF, initially developed for static scenes, have spurred the creation of various methods for video view synthesis. However, video view synthesis is challenged by motion blur, a consequence of object or camera movement during exposure, which hinders the precise synthesis of sharp spatio-temporal views. In response, we propose a novel dynamic deblurring NeRF framework for blurry monocular video, called DyBluRF, consisting of an Interleave Ray Refinement (IRR) stage and a Motion Decomposition-based Deblurring (MDD) stage. Our DyBluRF is the first method to address novel view synthesis for blurry monocular video. The IRR stage jointly reconstructs dynamic 3D scenes and refines the inaccurate camera poses extracted from the given blurry frames. The MDD stage introduces a novel incremental latent sharp-rays prediction (ILSP) approach for blurry monocular video frames, decomposing the latent sharp rays into global camera motion and local object motion components. Extensive experimental results demonstrate that our DyBluRF qualitatively and quantitatively outperforms very recent state-of-the-art methods. Our project page, including source code and pretrained models, is publicly available at https://kaist-viclab.github.io/dyblurf-site/.
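To make the motion-decomposition idea concrete, below is a minimal, hypothetical sketch of the blur formation model commonly assumed in deblurring NeRF work: a blurry pixel is modeled as the average of colors rendered along several latent sharp rays sampled across the exposure, with each ray's perturbation split into a global camera-motion term and a local object-motion term, as the abstract describes. All function and parameter names here are illustrative, not taken from the DyBluRF implementation.

```python
import numpy as np

def render_sharp(ray_origin, ray_dir):
    """Stand-in for NeRF volume rendering of one sharp ray.

    A toy radiance field: the RGB color depends only on the (unit)
    ray direction, mapped from [-1, 1] into [0, 1].
    """
    return 0.5 * (ray_dir + 1.0)

def blurry_pixel(base_origin, base_dir, camera_motion, object_motion, n=5):
    """Average n latent sharp rays sampled over the exposure time.

    Each latent ray direction is the base direction perturbed by a
    global camera-motion component and a local object-motion component,
    both scaled by the normalized exposure time t in [0, 1].
    """
    colors = []
    for t in np.linspace(0.0, 1.0, n):
        d = base_dir + t * camera_motion + t * object_motion
        d = d / np.linalg.norm(d)  # keep the ray direction unit-length
        colors.append(render_sharp(base_origin, d))
    return np.mean(colors, axis=0)  # averaging sharp colors yields the blur
```

In the actual method, `render_sharp` would be a learned dynamic radiance field and the two motion components would be predicted incrementally per frame; this sketch only illustrates why decomposing the latent rays lets the model explain blur from camera shake and object motion separately.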