DepthFM: Fast Monocular Depth Estimation with Flow Matching
March 20, 2024
Authors: Ming Gui, Johannes S. Fischer, Ulrich Prestel, Pingchuan Ma, Dmytro Kotovenko, Olga Grebenkova, Stefan Andreas Baumann, Vincent Tao Hu, Björn Ommer
cs.AI
Abstract
Monocular depth estimation is crucial for numerous downstream vision tasks
and applications. Current discriminative approaches to this problem are limited
due to blurry artifacts, while state-of-the-art generative methods suffer from
slow sampling due to their SDE nature. Rather than starting from noise, we seek
a direct mapping from input image to depth map. We observe that this can be
effectively framed using flow matching, since its straight trajectories through
solution space offer efficiency and high quality. Our study demonstrates that a
pre-trained image diffusion model can serve as an adequate prior for a flow
matching depth model, allowing efficient training on only synthetic data to
generalize to real images. We find that an auxiliary surface normals loss
further improves the depth estimates. Due to the generative nature of our
approach, our model reliably predicts the confidence of its depth estimates. On
standard benchmarks of complex natural scenes, our lightweight approach
exhibits state-of-the-art performance at favorable low computational cost
despite being trained on only a small amount of synthetic data.
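The core idea the abstract describes can be illustrated with a toy NumPy sketch: flow matching learns the velocity field of a straight path from a source point (here, the image representation, rather than pure noise) to a target point (the depth map). Because the path is a straight line, the regression target is constant and an Euler ODE solver needs very few steps. All names below (`interpolant`, `euler_sample`, the toy latents) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                      # toy latent dimension (assumption, not the paper's)
z0 = rng.normal(size=d)    # stand-in for the encoded input image
z1 = rng.normal(size=d)    # stand-in for the encoded depth map

def interpolant(z0, z1, t):
    """Straight-line path z_t = (1 - t) * z0 + t * z1 used by flow matching."""
    return (1.0 - t) * z0 + t * z1

# Along a straight path the flow-matching regression target is constant:
target_velocity = z1 - z0

def euler_sample(z0, velocity_fn, n_steps):
    """Integrate dz/dt = velocity_fn(z, t) from t=0 to t=1 with Euler steps."""
    z, dt = z0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        z = z + dt * velocity_fn(z, i * dt)
    return z

# With a perfectly learned (here: oracle) velocity field, a single Euler
# step already lands exactly on z1 -- this is why straight trajectories
# allow fast sampling compared to SDE-based generative depth models.
z_hat = euler_sample(z0, lambda z, t: target_velocity, n_steps=1)
print(np.allclose(z_hat, z1))
```

In the actual method, a neural network (initialized from a pre-trained image diffusion model) replaces the oracle velocity function and is trained to regress `z1 - z0` at interpolated points `z_t`; this toy version only demonstrates why the straight-path formulation keeps sampling cheap.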