

Revisiting Depth Representations for Feed-Forward 3D Gaussian Splatting

June 5, 2025
Authors: Duochao Shi, Weijie Wang, Donny Y. Chen, Zeyu Zhang, Jia-Wang Bian, Bohan Zhuang, Chunhua Shen
cs.AI

Abstract

Depth maps are widely used in feed-forward 3D Gaussian Splatting (3DGS) pipelines by unprojecting them into 3D point clouds for novel view synthesis. This approach offers advantages such as efficient training, the use of known camera poses, and accurate geometry estimation. However, depth discontinuities at object boundaries often lead to fragmented or sparse point clouds, degrading rendering quality -- a well-known limitation of depth-based representations. To tackle this issue, we introduce PM-Loss, a novel regularization loss based on a pointmap predicted by a pre-trained transformer. Although the pointmap itself may be less accurate than the depth map, it effectively enforces geometric smoothness, especially around object boundaries. With the improved depth map, our method significantly improves feed-forward 3DGS across various architectures and scenes, delivering consistently better rendering results. Our project page: https://aim-uofa.github.io/PMLoss
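The two operations the abstract describes -- unprojecting a depth map into a 3D point cloud, and penalizing its deviation from a transformer-predicted pointmap -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the choice of a mean-L1 distance are assumptions; the actual PM-Loss may use a different metric, alignment, or boundary weighting.

```python
import numpy as np

def unproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift a depth map (H, W) into a 3D point map (H, W, 3) in camera coordinates."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T           # back-project pixels into camera rays
    return rays * depth[..., None]            # scale each ray by its depth

def pm_loss(depth: np.ndarray, pointmap: np.ndarray, K: np.ndarray) -> float:
    """Hypothetical PM-Loss sketch: mean L1 distance between the point cloud
    unprojected from the predicted depth and a reference pointmap (H, W, 3)."""
    pts = unproject_depth(depth, K)
    return float(np.abs(pts - pointmap).mean())
```

Because the pointmap is predicted per pixel, it supplies a dense 3D target even where the depth map is discontinuous, which is why the regularizer can smooth geometry around object boundaries.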