LiftFeat: 3D Geometry-Aware Local Feature Matching
May 6, 2025
Authors: Yepeng Liu, Wenpeng Lai, Zhou Zhao, Yuxuan Xiong, Jinchi Zhu, Jun Cheng, Yongchao Xu
cs.AI
Abstract
Robust and efficient local feature matching plays a crucial role in
applications such as SLAM and visual localization for robotics. Despite great
progress, it remains very challenging to extract robust and discriminative
visual features in scenarios with drastic lighting changes, low-texture areas,
or repetitive patterns. In this paper, we propose a new lightweight network
called LiftFeat, which lifts the robustness of raw descriptors by
aggregating 3D geometric features. Specifically, we first adopt a pre-trained
monocular depth estimation model to generate pseudo surface normal labels,
supervising the extraction of 3D geometric features in terms of predicted
surface normals. We then design a 3D geometry-aware feature lifting module to
fuse surface normal features with raw 2D descriptor features. Integrating such
3D geometric features enhances the discriminative ability of 2D feature
description in extreme conditions. Extensive experimental results on relative
pose estimation, homography estimation, and visual localization tasks
demonstrate that our LiftFeat outperforms some lightweight state-of-the-art
methods. Code will be released at: https://github.com/lyp-deeplearning/LiftFeat.
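As a concrete illustration of the label-generation step described above, the sketch below derives per-pixel surface normals from a dense depth map using finite-difference gradients. This is a minimal, hypothetical helper under common assumptions (orthographic-style gradients, unit focal scaling); the paper's actual pseudo-label pipeline may differ.

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Estimate unit surface normals from a depth map (H, W) -> (H, W, 3).

    Hypothetical sketch: the normal direction is taken as proportional to
    (-dz/dx, -dz/dy, 1), then normalized to unit length.
    """
    dz_dy, dz_dx = np.gradient(depth)  # finite-difference depth gradients
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)  # avoid division by zero

# A constant (fronto-parallel) depth plane has zero gradients, so every
# pseudo normal points along +z.
flat = np.full((4, 4), 2.0)
n = normals_from_depth(flat)
```

Such pseudo normals, predicted from a frozen monocular depth model, can serve as supervision targets for the network's 3D geometric feature branch without any ground-truth geometry.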