Masked Depth Modeling for Spatial Perception
January 25, 2026
Authors: Bin Tan, Changjiang Sun, Xiage Qin, Hanat Adai, Zelin Fu, Tianxiang Zhou, Han Zhang, Yinghao Xu, Xing Zhu, Yujun Shen, Nan Xue
cs.AI
Abstract
Spatial visual perception is a fundamental requirement in physical-world applications such as autonomous driving and robotic manipulation, driven by the need to interact with 3D environments. Capturing pixel-aligned metric depth with RGB-D cameras is arguably the most practical approach, yet it is often hindered by hardware limitations and challenging imaging conditions, especially in the presence of specular or texture-less surfaces. In this work, we argue that the inaccuracies from depth sensors can be viewed as "masked" signals that inherently reflect underlying geometric ambiguities. Building on this motivation, we present LingBot-Depth, a depth completion model that refines depth maps from visual context via masked depth modeling and incorporates an automated data curation pipeline for scalable training. Encouragingly, our model outperforms top-tier RGB-D cameras in both depth precision and pixel coverage. Experimental results on a range of downstream tasks further suggest that LingBot-Depth learns a latent representation aligned across the RGB and depth modalities. We release the code, model checkpoints, and 3M RGB-depth pairs (2M real and 1M simulated) to the spatial perception community.
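To make the "sensor errors as masks" idea concrete, the sketch below shows one way a masked-depth training sample could be built: pixels the sensor failed to measure form a natural mask, and additional valid pixels are randomly held out so the reconstruction can be supervised at exactly those locations. This is a minimal illustration under assumed conventions (invalid depth encoded as 0, L1 reconstruction loss, and the hypothetical helper names `make_masked_depth` and `masked_l1_loss`), not the paper's actual pipeline.

```python
import numpy as np

def make_masked_depth(depth, invalid_value=0.0, extra_mask_ratio=0.3, rng=None):
    """Build one masked-depth training sample.

    Pixels the sensor could not measure (== invalid_value) act as a natural
    mask; on top of that, a random fraction of the *valid* pixels is held
    out, so ground truth is available at exactly those masked locations.
    """
    rng = np.random.default_rng(rng)
    sensor_valid = depth != invalid_value              # what the sensor measured
    drop = rng.random(depth.shape) < extra_mask_ratio  # extra random masking
    held_out = sensor_valid & drop                     # masked but supervisable
    model_input = np.where(sensor_valid & ~drop, depth, 0.0)
    return model_input, held_out

def masked_l1_loss(pred, target, held_out):
    """L1 reconstruction loss restricted to the held-out valid pixels."""
    if not held_out.any():
        return 0.0
    return float(np.abs(pred - target)[held_out].mean())

# Toy example: a 4x4 depth map where one row failed to measure,
# e.g. a specular surface the sensor cannot resolve.
depth = np.full((4, 4), 2.0)
depth[1, :] = 0.0
model_input, held_out = make_masked_depth(depth, rng=0)
# A perfect prediction reconstructs the held-out pixels exactly.
loss = masked_l1_loss(np.full_like(depth, 2.0), depth, held_out)
```

Note that sensor-invalid pixels never enter the loss here, since no ground truth exists for them; at training scale they would instead be supervised by the curated real and simulated RGB-depth pairs mentioned above.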