Masked Depth Modeling for Spatial Perception
January 25, 2026
Authors: Bin Tan, Changjiang Sun, Xiage Qin, Hanat Adai, Zelin Fu, Tianxiang Zhou, Han Zhang, Yinghao Xu, Xing Zhu, Yujun Shen, Nan Xue
cs.AI
Abstract
Spatial visual perception is a fundamental requirement in physical-world applications such as autonomous driving and robotic manipulation, driven by the need to interact with 3D environments. Capturing pixel-aligned metric depth with RGB-D cameras is the most viable approach, yet it is often hampered by hardware limitations and challenging imaging conditions, especially in the presence of specular or texture-less surfaces. In this work, we argue that the inaccuracies from depth sensors can be viewed as "masked" signals that inherently reflect underlying geometric ambiguities. Building on this motivation, we present LingBot-Depth, a depth completion model that leverages visual context to refine depth maps through masked depth modeling and incorporates an automated data curation pipeline for scalable training. Encouragingly, our model outperforms top-tier RGB-D cameras in both depth precision and pixel coverage. Experimental results on a range of downstream tasks further suggest that LingBot-Depth learns an aligned latent representation across the RGB and depth modalities. We release the code, pretrained checkpoints, and 3M RGB-depth pairs (2M real and 1M simulated) to the spatial perception community.
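To make the masked-depth-modeling idea concrete, the sketch below shows one plausible way to build the training mask: pixels where the sensor failed (zero or non-finite readings, as on specular or texture-less surfaces) act as a natural mask, augmented with random masking over valid pixels so the model also learns to refine regions the sensor did cover. This is a minimal illustration under our own assumptions, not the LingBot-Depth implementation; all function names here are hypothetical.

```python
import numpy as np

def build_mdm_mask(sensor_depth, extra_mask_ratio=0.3, rng=None):
    """Combine sensor-failure pixels with random masking for a
    masked-depth-modeling objective (hypothetical helper).

    sensor_depth: (H, W) array; 0 or NaN marks a failed reading.
    Returns a boolean mask: True = depth hidden from the model,
    to be reconstructed from RGB context.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Pixels where the depth sensor itself failed serve as a
    # "natural" mask reflecting geometric ambiguity.
    sensor_mask = ~np.isfinite(sensor_depth) | (sensor_depth <= 0)
    # Additionally hide a random fraction of the valid pixels.
    random_mask = rng.random(sensor_depth.shape) < extra_mask_ratio
    return sensor_mask | (random_mask & ~sensor_mask)

def mdm_loss(pred_depth, gt_depth, mask):
    """L1 reconstruction loss over masked pixels with valid GT."""
    valid = mask & np.isfinite(gt_depth) & (gt_depth > 0)
    if not valid.any():
        return 0.0
    return float(np.abs(pred_depth[valid] - gt_depth[valid]).mean())
```

In this toy formulation, supervision is only available where ground-truth depth exists, so sensor-failure pixels contribute to the mask (forcing the model to rely on visual context there) but are excluded from the loss unless a separate reference depth covers them.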