Revisiting Depth Representations for Feed-Forward 3D Gaussian Splatting
June 5, 2025
Authors: Duochao Shi, Weijie Wang, Donny Y. Chen, Zeyu Zhang, Jia-Wang Bian, Bohan Zhuang, Chunhua Shen
cs.AI
Abstract
Depth maps are widely used in feed-forward 3D Gaussian Splatting (3DGS)
pipelines by unprojecting them into 3D point clouds for novel view synthesis.
This approach offers advantages such as efficient training, the use of known
camera poses, and accurate geometry estimation. However, depth discontinuities
at object boundaries often lead to fragmented or sparse point clouds, degrading
rendering quality -- a well-known limitation of depth-based representations. To
tackle this issue, we introduce PM-Loss, a novel regularization loss based on a
pointmap predicted by a pre-trained transformer. Although the pointmap itself
may be less accurate than the depth map, it effectively enforces geometric
smoothness, especially around object boundaries. With the improved depth map,
our method significantly improves feed-forward 3DGS across various
architectures and scenes, delivering consistently better rendering results. Our
project page: https://aim-uofa.github.io/PMLoss
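The unprojection step described in the abstract, which lifts a depth map into a 3D point cloud with known camera intrinsics, can be sketched as follows. This is a minimal illustration, not the paper's implementation; `pointmap_loss` is a hypothetical stand-in that penalizes the distance between the unprojected points and a reference pointmap, whereas the actual PM-Loss formulation may differ.

```python
import numpy as np

def unproject_depth(depth, K):
    """Unproject a depth map (H, W) into a per-pixel 3D point cloud
    (H, W, 3) in camera coordinates, using pinhole intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T   # back-projected ray directions
    return rays * depth[..., None]    # scale each ray by its depth

def pointmap_loss(points, pointmap):
    """Hypothetical PM-Loss stand-in: mean L2 distance between the
    unprojected points and a pointmap predicted by a pre-trained
    transformer (the paper's exact loss may differ)."""
    return np.mean(np.linalg.norm(points - pointmap, axis=-1))
```

Because both quantities live in the same per-pixel layout, the regularizer can be applied densely without any nearest-neighbor matching, which is what makes a pointmap a convenient geometric prior around object boundaries.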