M^3: Dense Matching Meets Multi-View Foundation Models for Monocular Gaussian Splatting SLAM
March 17, 2026
Authors: Kerui Ren, Guanghao Li, Changjian Jiang, Yingxiang Xu, Tao Lu, Linning Xu, Junting Dong, Jiangmiao Pang, Mulin Yu, Bo Dai
cs.AI
Abstract
Streaming reconstruction from uncalibrated monocular video remains challenging, as it requires both high-precision pose estimation and computationally efficient online refinement in dynamic environments. While coupling 3D foundation models with SLAM frameworks is a promising paradigm, a critical bottleneck persists: most multi-view foundation models estimate poses in a feed-forward manner, yielding pixel-level correspondences that lack the requisite precision for rigorous geometric optimization. To address this, we present M^3, which augments the Multi-view foundation model with a dedicated Matching head to facilitate fine-grained dense correspondences and integrates it into a robust Monocular Gaussian Splatting SLAM. M^3 further enhances tracking stability by incorporating dynamic area suppression and cross-inference intrinsic alignment. Extensive experiments on diverse indoor and outdoor benchmarks demonstrate state-of-the-art accuracy in both pose estimation and scene reconstruction. Notably, M^3 reduces ATE RMSE by 64.3% compared to VGGT-SLAM 2.0 and outperforms ARTDECO by 2.11 dB in PSNR on the ScanNet++ dataset.
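The abstract reports results in ATE RMSE (trajectory accuracy) and PSNR (rendering quality). As a reference for how these two standard metrics are defined, here is a minimal sketch using their textbook formulas; this is an illustration of the metrics only, not code from the paper, and it assumes trajectory alignment (e.g. Umeyama) has already been applied:

```python
import numpy as np

def ate_rmse(gt_positions, est_positions):
    """Absolute Trajectory Error (RMSE): root-mean-square of the
    per-frame Euclidean distance between ground-truth and estimated
    camera positions. Both arrays have shape (N, 3) and are assumed
    to be expressed in a common, pre-aligned frame."""
    per_frame_err = np.linalg.norm(gt_positions - est_positions, axis=1)
    return float(np.sqrt(np.mean(per_frame_err ** 2)))

def psnr(reference, rendered, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and
    a rendered image of the same shape, with pixel values in
    [0, max_val]. Higher is better; +2.11 dB is a sizeable gain."""
    mse = np.mean((reference - rendered) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

For example, a rendered image whose pixels are uniformly off by 0.1 from the reference (on a [0, 1] scale) has an MSE of 0.01 and therefore a PSNR of exactly 20 dB.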