MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting
August 25, 2025
Authors: Hanzhi Chang, Ruijie Zhu, Wenjie Chang, Mulin Yu, Yanzhe Liang, Jiahao Lu, Zhuoyuan Li, Tianzhu Zhang
cs.AI
Abstract
Surface reconstruction has been widely studied in computer vision and
graphics. However, existing surface reconstruction works struggle to recover
accurate scene geometry when the input views are extremely sparse. To address
this issue, we propose MeshSplat, a generalizable sparse-view surface
reconstruction framework via Gaussian Splatting. Our key idea is to leverage
2D Gaussian Splatting (2DGS) as a bridge, which connects novel view synthesis
to learned geometric priors and then transfers these priors to achieve surface
reconstruction.
Specifically, we incorporate a feed-forward network to predict per-view
pixel-aligned 2DGS, which enables the network to synthesize novel view images
and thus eliminates the need for direct 3D ground-truth supervision. To improve
the accuracy of 2DGS position and orientation prediction, we propose a Weighted
Chamfer Distance Loss to regularize the depth maps, especially in overlapping
areas of input views, and also a normal prediction network to align the
orientation of 2DGS with normal vectors predicted by a monocular normal
estimator. Extensive experiments validate the effectiveness of our proposed
improvements, demonstrating that our method achieves state-of-the-art
performance in generalizable sparse-view mesh reconstruction tasks. Project
Page: https://hanzhichang.github.io/meshsplat_web
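The two auxiliary objectives described in the abstract can be illustrated with a minimal sketch. The abstract does not specify the paper's exact weighting scheme or point sampling, so the `weighted_chamfer` and `normal_alignment` functions below are simplified, hypothetical formulations: a per-point-weighted bidirectional Chamfer distance between two point sets (e.g. back-projected from depth maps), and a cosine-based loss aligning predicted 2DGS normals with monocular normal estimates.

```python
import numpy as np

def weighted_chamfer(p, q, w_p, w_q):
    """Weighted bidirectional Chamfer distance between point sets
    p (N, 3) and q (M, 3), with per-point weights w_p (N,) and w_q (M,).
    Weights could, for instance, emphasize the overlapping regions of
    input views (a hypothetical choice; the paper's scheme may differ)."""
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Weighted nearest-neighbor terms in both directions
    term_pq = np.sum(w_p * d.min(axis=1)) / np.sum(w_p)
    term_qp = np.sum(w_q * d.min(axis=0)) / np.sum(w_q)
    return term_pq + term_qp

def normal_alignment(n_pred, n_mono):
    """Mean (1 - cosine similarity) between predicted 2DGS normals and
    monocular normal estimates, both of shape (N, 3)."""
    n_pred = n_pred / np.linalg.norm(n_pred, axis=-1, keepdims=True)
    n_mono = n_mono / np.linalg.norm(n_mono, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(n_pred * n_mono, axis=-1)))
```

Both losses are zero when the depth-induced point sets coincide and the normals agree, and grow with geometric disagreement, which is the regularizing behavior the abstract describes.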