DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting

March 31, 2025
Authors: Seungjun Lee, Gim Hee Lee
cs.AI

Abstract

Reconstructing sharp 3D representations from blurry multi-view images is a long-standing problem in computer vision. Recent works attempt to enhance high-quality novel view synthesis under motion blur by leveraging event-based cameras, benefiting from their high dynamic range and microsecond temporal resolution. However, they often reach sub-optimal visual quality, either restoring inaccurate color or losing fine-grained details. In this paper, we present DiET-GS, a diffusion prior and event stream-assisted motion deblurring 3D Gaussian Splatting (3DGS) framework. Our framework effectively leverages both blur-free event streams and a diffusion prior in a two-stage training strategy. Specifically, we introduce a novel framework that constrains 3DGS with the event double integral, achieving both accurate color and well-defined details. Additionally, we propose a simple technique that leverages the diffusion prior to further enhance edge details. Qualitative and quantitative results on both synthetic and real-world data demonstrate that our DiET-GS produces novel views of significantly higher quality than existing baselines. Our project page is https://diet-gs.github.io.
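For readers unfamiliar with the term, the event double integral (EDI) referenced above is commonly formulated as below (after Pan et al., 2019); this is the standard relation, and the exact constraint used in DiET-GS may differ in detail. Given a blurry frame $B$ captured over exposure time $T$, the latent sharp image $L(f)$ at reference time $f$, the per-pixel event signal $e(s)$, and contrast threshold $c$:

$$
B \;=\; \frac{L(f)}{T}\int_{f-\frac{T}{2}}^{\,f+\frac{T}{2}} \exp\!\left(c\int_{f}^{t} e(s)\,\mathrm{d}s\right)\mathrm{d}t
$$

The inner integral accumulates events to relate the latent sharp image at time $f$ to any instant $t$ within the exposure, and the outer integral averages these latent images to reproduce the observed blur; enforcing this relation on 3DGS renderings therefore supervises both color intensity and sharp structure.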
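The abstract does not spell out how the diffusion prior is applied. One common mechanism for distilling a pretrained diffusion model into a 3D representation is score distillation sampling (SDS, Poole et al., 2022), sketched below as an illustrative assumption rather than the paper's confirmed loss. With rendered image $x = g(\theta)$, noise $\epsilon \sim \mathcal{N}(0, I)$, noised sample $x_t$, weighting $w(t)$, and the diffusion model's noise prediction $\hat{\epsilon}_\phi$:

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} \;=\; \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right]
$$

Gradients of this form push rendered views toward the diffusion model's learned image distribution, which is one way such a prior can sharpen edge details.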