

COLMAP-Free 3D Gaussian Splatting

December 12, 2023
Authors: Yang Fu, Sifei Liu, Amey Kulkarni, Jan Kautz, Alexei A. Efros, Xiaolong Wang
cs.AI

Abstract

While neural rendering has led to impressive advances in scene reconstruction and novel view synthesis, it relies heavily on accurately pre-computed camera poses. To relax this constraint, multiple efforts have been made to train Neural Radiance Fields (NeRFs) without pre-processed camera poses. However, the implicit representations of NeRFs pose extra challenges when optimizing the 3D structure and camera poses at the same time. On the other hand, the recently proposed 3D Gaussian Splatting provides new opportunities given its explicit point cloud representation. This paper leverages both the explicit geometric representation and the continuity of the input video stream to perform novel view synthesis without any SfM preprocessing. We process the input frames sequentially and progressively grow the set of 3D Gaussians by taking one input frame at a time, without the need to pre-compute the camera poses. Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes. Our project page is https://oasisyang.github.io/colmap-free-3dgs
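The sequential pipeline described in the abstract can be sketched in a few lines: the first camera is fixed, each new frame's pose is estimated against the current scene, and the Gaussian set then grows with content seen from the new view. The sketch below is purely illustrative pseudocode-style Python, not the authors' implementation; `estimate_pose` and `grow_gaussians` are hypothetical placeholders (a real system would optimize the pose by minimizing a photometric rendering loss, then jointly refine all Gaussians).

```python
import numpy as np

def init_gaussians(gaussians, frame):
    # Seed the explicit point-based representation from the first frame.
    # (Toy version: one Gaussian per 3D point; real 3DGS stores mean,
    # covariance, opacity, and spherical-harmonic color per Gaussian.)
    for point in frame:
        gaussians.append({"mean": np.asarray(point, dtype=float), "opacity": 1.0})

def estimate_pose(gaussians, prev_pose, frame):
    # Placeholder: assume a small constant forward motion. A real system
    # freezes the Gaussians and optimizes only this camera's pose against
    # the new frame (local pose estimation).
    delta = np.eye(4)
    delta[2, 3] = 0.1  # hypothetical 0.1-unit step along the camera z-axis
    return prev_pose @ delta

def grow_gaussians(gaussians, frame, pose):
    # Add new Gaussians for content observed from the new viewpoint,
    # transformed into the world frame by the estimated pose.
    for point in frame:
        p = np.asarray(tuple(point) + (1.0,), dtype=float)  # homogeneous coords
        gaussians.append({"mean": (pose @ p)[:3], "opacity": 1.0})

def process_stream(frames):
    """Sequential, SfM-free pipeline: no COLMAP preprocessing needed."""
    gaussians = []
    poses = [np.eye(4)]                # first camera anchored at the origin
    init_gaussians(gaussians, frames[0])
    for frame in frames[1:]:
        pose = estimate_pose(gaussians, poses[-1], frame)   # step 1: pose
        poses.append(pose)
        grow_gaussians(gaussians, frame, pose)              # step 2: grow
    return gaussians, poses
```

With a three-frame toy stream of one point each, the Gaussian set grows by one per frame and the final pose has accumulated two 0.1-unit steps; the point is only the control flow, since every numerical step here is a stand-in for an optimization.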