GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction
December 31, 2025
Authors: Yi-Chuan Huang, Hao-Jen Chien, Chin-Yang Lin, Ying-Huan Chen, Yu-Lun Liu
cs.AI
Abstract
Recent advances in 3D reconstruction have achieved remarkable progress in high-quality scene capture from dense multi-view imagery, yet these methods struggle when input views are limited. Various approaches, including regularization techniques, semantic priors, and geometric constraints, have been proposed to address this challenge. The latest diffusion-based methods have demonstrated substantial improvements by generating novel views from new camera poses to augment training data, surpassing earlier regularization- and prior-based techniques. Despite this progress, we identify three critical limitations in these state-of-the-art approaches: inadequate coverage beyond known view peripheries, geometric inconsistencies across generated views, and computationally expensive pipelines. We introduce GaMO (Geometry-aware Multi-view Outpainter), a framework that reformulates sparse-view reconstruction as multi-view outpainting. Instead of generating new viewpoints, GaMO expands the field of view from existing camera poses, which inherently preserves geometric consistency while providing broader scene coverage. Our approach employs multi-view conditioning and geometry-aware denoising strategies in a zero-shot manner, requiring no additional training. Extensive experiments on Replica and ScanNet++ demonstrate state-of-the-art reconstruction quality across 3, 6, and 9 input views, outperforming prior methods in PSNR and LPIPS while achieving a 25× speedup over state-of-the-art diffusion-based methods, with processing time under 10 minutes. Project page: https://yichuanh.github.io/GaMO/
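
The abstract's central idea, expanding the field of view from a fixed camera pose, has a simple geometric reading: keep the pose and focal length fixed, enlarge the image canvas, and shift the principal point so the known pixels sit at the center while the border is left for a diffusion model to outpaint. The sketch below illustrates only this canvas-expansion step; it is a minimal illustration under those assumptions, not the authors' implementation, and the function name `expand_fov` is hypothetical.

```python
# Minimal sketch (not GaMO's code): widening a pinhole camera's field of
# view at a fixed pose by enlarging the canvas, leaving a border mask for
# a diffusion outpainter. Assumes K is a float 3x3 intrinsics matrix and
# image is an H x W x 3 array.
import numpy as np

def expand_fov(image: np.ndarray, K: np.ndarray, scale: float = 1.5):
    """Return (canvas, outpaint_mask, K_new) for an enlarged canvas.

    Keeping the focal length (and pose) fixed while growing the canvas
    widens the field of view; the original pixels land unchanged in the
    center, and the border is the region to be outpainted.
    """
    H, W = image.shape[:2]
    new_H, new_W = int(round(H * scale)), int(round(W * scale))

    # Same focal length, but shift the principal point so the original
    # view is centered in the enlarged canvas.
    off_x, off_y = (new_W - W) // 2, (new_H - H) // 2
    K_new = K.copy()
    K_new[0, 2] += off_x
    K_new[1, 2] += off_y

    # Paste the known pixels; mark everything else for outpainting.
    canvas = np.zeros((new_H, new_W, 3), dtype=image.dtype)
    outpaint_mask = np.ones((new_H, new_W), dtype=bool)
    canvas[off_y:off_y + H, off_x:off_x + W] = image
    outpaint_mask[off_y:off_y + H, off_x:off_x + W] = False
    return canvas, outpaint_mask, K_new
```

Because the horizontal field of view is 2·atan(W / 2f), a 1.5× canvas turns a 90° view into roughly 113° without moving the camera, which is why outpainting from existing poses can broaden scene coverage while keeping the known pixels, and hence the geometry they constrain, untouched.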