Depth Any Panoramas: A Foundation Model for Panoramic Depth Estimation
December 18, 2025
Authors: Xin Lin, Meixi Song, Dizhe Zhang, Wenxuan Lu, Haodong Li, Bo Du, Ming-Hsuan Yang, Truong Nguyen, Lu Qi
cs.AI
Abstract
In this work, we present a panoramic metric depth foundation model that generalizes across diverse scene distances. We explore a data-in-the-loop paradigm from the perspectives of both data construction and framework design. We collect a large-scale dataset by combining public datasets, high-quality synthetic data from our UE5 simulator and text-to-image models, and real panoramic images from the web. To reduce the domain gaps between indoor/outdoor and synthetic/real data, we introduce a three-stage pseudo-label curation pipeline that generates reliable ground truth for unlabeled images. For the model, we adopt DINOv3-Large as the backbone for its strong pre-trained generalization, and introduce a plug-and-play range mask head, sharpness-centric optimization, and geometry-centric optimization to improve robustness to varying distances and enforce geometric consistency across views. Experiments on multiple benchmarks (e.g., Stanford2D3D, Matterport3D, and Deep360) demonstrate strong performance and zero-shot generalization, with particularly robust and stable metric predictions in diverse real-world scenes. The project page can be found at: https://insta360-research-team.github.io/DAP_website/
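The abstract does not specify what the three curation stages are, but the general pattern of pseudo-label filtering can be illustrated. The following is a minimal sketch, not the paper's actual pipeline: the three stages shown here (a confidence threshold, a cross-view consistency check, and a metric-range sanity check) and all parameter names (`conf_thresh`, `consistency_tol`, `max_depth`) are hypothetical choices for illustration.

```python
import numpy as np

def curate_pseudo_labels(depth, confidence, depth_other_view,
                         conf_thresh=0.8, consistency_tol=0.1,
                         max_depth=80.0):
    """Hypothetical three-stage filter over a pseudo-depth map.

    Returns a boolean mask marking pixels kept as reliable pseudo
    ground truth; masked-out pixels would be ignored during training.
    """
    # Stage 1: keep only high-confidence predictions.
    mask = confidence >= conf_thresh
    # Stage 2: cross-view consistency -- the relative disagreement
    # with a prediction from another view must stay small.
    rel_err = np.abs(depth - depth_other_view) / np.maximum(depth, 1e-6)
    mask &= rel_err <= consistency_tol
    # Stage 3: range sanity check -- discard degenerate metric values.
    mask &= (depth > 0) & (depth <= max_depth)
    return mask
```

Each stage only shrinks the set of trusted pixels, so the final mask is the intersection of all three criteria; the real pipeline's stages may differ entirely from these.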