COLMAP-Free 3D Gaussian Splatting
December 12, 2023
Authors: Yang Fu, Sifei Liu, Amey Kulkarni, Jan Kautz, Alexei A. Efros, Xiaolong Wang
cs.AI
Abstract
While neural rendering has led to impressive advances in scene reconstruction and novel view synthesis, it relies heavily on accurately pre-computed camera poses. To relax this constraint, multiple efforts have been made to train Neural Radiance Fields (NeRFs) without pre-processed camera poses. However, the implicit representation of NeRFs makes it harder to optimize the 3D structure and camera poses simultaneously. The recently proposed 3D Gaussian Splatting, by contrast, offers new opportunities through its explicit point cloud representation. This paper leverages both the explicit geometric representation and the continuity of the input video stream to perform novel view synthesis without any SfM preprocessing. We process the input frames sequentially and progressively grow the set of 3D Gaussians one input frame at a time, without the need to pre-compute camera poses. Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes. Our project page is https://oasisyang.github.io/colmap-free-3dgs
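
The abstract describes the pipeline only at a high level. Below is a minimal PyTorch sketch of the sequential idea, not the authors' implementation: `render_fn`, `init_fn`, and `grow_fn` are hypothetical placeholders for a differentiable 3DGS renderer, initialization from the first frame, and progressive densification; only the per-frame SE(3) pose optimization is spelled out.

```python
import torch


def se3_exp(xi: torch.Tensor) -> torch.Tensor:
    """Map a 6-vector (rotation, translation) to a 4x4 pose matrix.

    Rotation uses Rodrigues' formula; the translation is applied directly,
    a common simplification of the full SE(3) exponential map.
    """
    omega, v = xi[:3], xi[3:]
    theta = torch.sqrt((omega * omega).sum() + 1e-12)  # eps keeps grads finite at 0
    zero = omega.new_zeros(())
    K = torch.stack([
        torch.stack([zero, -omega[2], omega[1]]),
        torch.stack([omega[2], zero, -omega[0]]),
        torch.stack([-omega[1], omega[0], zero]),
    ]) / theta
    R = torch.eye(3) + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)
    T = torch.eye(4)
    T[:3, :3] = R
    T[:3, 3] = v
    return T


def estimate_pose(gaussians, prev_pose, frame, render_fn, iters=200, lr=1e-3):
    """Estimate the new frame's camera pose against the current Gaussians.

    Optimizes a relative SE(3) transform (initialized at the identity, i.e.
    the previous pose) by backpropagating a photometric loss through the
    renderer -- possible precisely because the representation is explicit.
    """
    xi = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([xi], lr=lr)
    for _ in range(iters):
        pose = se3_exp(xi) @ prev_pose
        pred = render_fn(gaussians, pose)   # hypothetical differentiable renderer
        loss = (pred - frame).abs().mean()  # L1 photometric loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return se3_exp(xi.detach()) @ prev_pose


def colmap_free_3dgs(frames, init_fn, grow_fn, render_fn):
    """Sequential pipeline: no SfM, no pre-computed poses.

    Initialize Gaussians from the first frame at the identity pose, then for
    each new frame estimate its pose and grow the Gaussian set.
    """
    pose = torch.eye(4)
    gaussians = init_fn(frames[0], pose)  # hypothetical: unproject frame 0
    poses = [pose]
    for frame in frames[1:]:
        pose = estimate_pose(gaussians, pose, frame, render_fn)
        gaussians = grow_fn(gaussians, frame, pose)  # hypothetical densification
        poses.append(pose)
    return gaussians, poses
```

The sketch leans on the two properties the abstract highlights: the explicit, differentiable point representation lets photometric error flow directly to the 6-DoF pose, and the continuity of a video stream keeps the identity initialization of each relative pose close to the optimum.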