

VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction

February 27, 2024
Authors: Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, Wenming Yang
cs.AI

Abstract

Existing NeRF-based methods for large scene reconstruction often have limitations in visual quality and rendering speed. While the recent 3D Gaussian Splatting works well on small-scale and object-centric scenes, scaling it up to large scenes poses challenges due to limited video memory, long optimization time, and noticeable appearance variations. To address these challenges, we present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting. We propose a progressive partitioning strategy to divide a large scene into multiple cells, where the training cameras and point cloud are properly distributed with an airspace-aware visibility criterion. These cells are merged into a complete scene after parallel optimization. We also introduce decoupled appearance modeling into the optimization process to reduce appearance variations in the rendered images. Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets, enabling fast optimization and high-fidelity real-time rendering.
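To make the partitioning idea concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' implementation: it bins training cameras into ground-plane grid cells and then grows each cell's camera set with a crude visibility test that projects the cell's bounding box into every camera. The function names (`partition_cameras`, `visible_ratio`, `expand_cell`), the grid layout, and the 0.25 coverage threshold are all assumptions for illustration; the paper's airspace-aware visibility criterion is more involved than this box-projection ratio.

```python
# Hypothetical sketch (not the authors' code): partition training cameras into
# ground-plane grid cells, then expand each cell's camera set with a simple
# visibility test that projects the cell's bounding box into each camera.
import numpy as np

def partition_cameras(cam_centers, cell_size):
    """Assign each camera center (Nx3 array) to a 2D grid cell on the x-y plane."""
    cells = {}
    for idx, c in enumerate(cam_centers):
        key = (int(np.floor(c[0] / cell_size)), int(np.floor(c[1] / cell_size)))
        cells.setdefault(key, []).append(idx)
    return cells

def visible_ratio(cell_corners_world, K, w2c, width, height):
    """Fraction of the image covered by the projected bounding box of a cell.

    cell_corners_world: (8, 3) corners of the cell's bounding box.
    K: (3, 3) intrinsics, w2c: (4, 4) world-to-camera transform.
    """
    pts_h = np.concatenate([cell_corners_world, np.ones((8, 1))], axis=1)
    cam_pts = (w2c @ pts_h.T).T[:, :3]
    cam_pts = cam_pts[cam_pts[:, 2] > 1e-3]  # keep corners in front of the camera
    if len(cam_pts) == 0:
        return 0.0
    uv = (K @ cam_pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide to pixel coordinates
    u0, v0 = uv.min(axis=0)
    u1, v1 = uv.max(axis=0)
    # Overlap between the projected box and the image rectangle, as an image fraction.
    inter = max(0.0, min(u1, width) - max(u0, 0.0)) * max(0.0, min(v1, height) - max(v0, 0.0))
    return inter / (width * height)

def expand_cell(cell_corners_world, cameras, threshold=0.25):
    """Select every camera whose projected view of the cell exceeds the threshold."""
    selected = []
    for i, cam in enumerate(cameras):
        r = visible_ratio(cell_corners_world, cam["K"], cam["w2c"],
                          cam["width"], cam["height"])
        if r > threshold:
            selected.append(i)
    return selected
```

Under this kind of scheme, each cell is optimized independently (and in parallel) on its assigned cameras and points, and the per-cell Gaussians are merged afterward into the full scene model.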