MG-Nav: Dual-Scale Visual Navigation via Sparse Spatial Memory

November 27, 2025
Authors: Bo Wang, Jiehong Lin, Chenzhi Liu, Xinting Hu, Yifei Yu, Tianjia Liu, Zhongrui Wang, Xiaojuan Qi
cs.AI

Abstract

We present MG-Nav (Memory-Guided Navigation), a dual-scale framework for zero-shot visual navigation that unifies global memory-guided planning with local geometry-enhanced control. At its core is the Sparse Spatial Memory Graph (SMG), a compact, region-centric memory in which each node aggregates multi-view keyframes and object semantics, capturing both appearance and spatial structure while preserving viewpoint diversity. At the global level, the agent is localized on the SMG and a goal-conditioned node path is planned via image-to-instance hybrid retrieval, producing a sequence of reachable waypoints for long-horizon guidance. At the local level, a navigation foundation policy executes these waypoints in point-goal mode with obstacle-aware control, switching to image-goal mode when navigating from the final node towards the visual target. To further enhance viewpoint alignment and goal recognition, we introduce the VGGT-adapter, a lightweight geometric module built on the pre-trained VGGT model that aligns observation and goal features in a shared 3D-aware space. MG-Nav runs global planning and local control at different frequencies, using periodic re-localization to correct errors. Experiments on the HM3D Instance-Image-Goal and MP3D Image-Goal benchmarks demonstrate that MG-Nav achieves state-of-the-art zero-shot performance and remains robust under dynamic scene rearrangements and unseen scene conditions.
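
Since the abstract compresses several mechanisms into one paragraph, here is a minimal, self-contained sketch (toy scale, in Python) of two of them: a region-centric memory graph whose nodes aggregate multi-view keyframe embeddings and object labels, and the dual-frequency loop in which global re-localization and node-path planning run every K steps while a local policy would execute waypoints every step. Everything below is an illustrative assumption rather than the paper's implementation: the class names and fields are invented, a cosine-plus-label-overlap score stands in for the image-to-instance hybrid retrieval, and Dijkstra stands in for the node-path planner.

```python
import heapq
from dataclasses import dataclass, field

import numpy as np


@dataclass
class SMGNode:
    """One region of the scene: multi-view keyframe embeddings plus object semantics."""
    node_id: int
    keyframes: list = field(default_factory=list)   # per-view embedding vectors (hypothetical)
    objects: set = field(default_factory=set)       # instance labels observed in this region
    neighbors: dict = field(default_factory=dict)   # node_id -> traversal cost


class SparseSpatialMemoryGraph:
    def __init__(self):
        self.nodes = {}

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def score(self, node, query_emb, query_objects):
        """Hybrid score: appearance similarity to the node centroid plus label overlap.
        The 0.5/0.5 weighting is an arbitrary stand-in for the paper's retrieval."""
        centroid = np.mean(node.keyframes, axis=0)
        cos = float(query_emb @ centroid) / (
            np.linalg.norm(query_emb) * np.linalg.norm(centroid) + 1e-8)
        overlap = len(node.objects & query_objects) / max(len(query_objects), 1)
        return 0.5 * cos + 0.5 * overlap

    def localize(self, emb, objects):
        """Return the node that best matches an observation (or a goal query)."""
        return max(self.nodes.values(),
                   key=lambda n: self.score(n, emb, objects)).node_id

    def plan(self, start, goal):
        """Dijkstra over the sparse graph; the node path doubles as the waypoint sequence."""
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in self.nodes[u].neighbors.items():
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]


# --- Toy usage: a 4-node chain 0-1-2-3 ---------------------------------------
rng = np.random.default_rng(0)
smg = SparseSpatialMemoryGraph()
for i in range(4):
    smg.add_node(SMGNode(i, keyframes=[rng.normal(size=16) for _ in range(3)],
                         objects={f"obj{i}"} | ({"chair"} if i == 3 else set())))
for a, b in [(0, 1), (1, 2), (2, 3)]:
    smg.nodes[a].neighbors[b] = smg.nodes[b].neighbors[a] = 1.0

goal_emb, goal_objects = rng.normal(size=16), {"chair"}
goal = smg.localize(goal_emb, goal_objects)  # goal query localized on the graph

# Dual-frequency sketch: re-localize and re-plan every K steps, while a local
# policy (omitted) would consume path[0] as a point goal at every step and
# switch to image-goal mode once the final node is reached.
K, path = 4, None
for step in range(8):
    if step % K == 0:
        here = smg.localize(rng.normal(size=16), {"obj0"})  # stand-in observation
        path = smg.plan(here, goal)
        print(f"step {step}: re-planned waypoints {path}")
```

The local point-goal/image-goal policy and the VGGT-adapter feature alignment are deliberately omitted; in this sketch the node path itself plays the role of the waypoint sequence handed to local control, and the graph's sparsity is what would keep each periodic re-localization cheap.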