Mem4Nav: Boosting Vision-and-Language Navigation in Urban Environments with a Hierarchical Spatial-Cognition Long-Short Memory System

June 24, 2025
Authors: Lixuan He, Haoyu Dong, Zhenxing Chen, Yangcheng Yu, Jie Feng, Yong Li
cs.AI

Abstract

Vision-and-Language Navigation (VLN) in large-scale urban environments requires embodied agents to ground linguistic instructions in complex scenes and recall relevant experiences over extended time horizons. Prior modular pipelines offer interpretability but lack unified memory, while end-to-end (M)LLM agents excel at fusing vision and language yet remain constrained by fixed context windows and implicit spatial reasoning. We introduce Mem4Nav, a hierarchical spatial-cognition long-short memory system that can augment any VLN backbone. Mem4Nav fuses a sparse octree for fine-grained voxel indexing with a semantic topology graph for high-level landmark connectivity, storing both in trainable memory tokens embedded via a reversible Transformer. Long-term memory (LTM) compresses and retains historical observations at both octree and graph nodes, while short-term memory (STM) caches recent multimodal entries in relative coordinates for real-time obstacle avoidance and local planning. At each step, STM retrieval sharply prunes dynamic context, and, when deeper history is needed, LTM tokens are decoded losslessly to reconstruct past embeddings. Evaluated on Touchdown and Map2Seq across three backbones (modular, state-of-the-art VLN with a prompt-based LLM, and state-of-the-art VLN with a strided-attention MLLM), Mem4Nav yields 7-13 pp gains in Task Completion, significant reductions in shortest-path distance (SPD), and >10 pp improvement in normalized dynamic time warping (nDTW). Ablations confirm the indispensability of both the hierarchical map and the dual memory modules. Our code is open-sourced at https://github.com/tsinghua-fib-lab/Mem4Nav.
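To make the memory architecture described in the abstract concrete, below is a minimal Python sketch of the hierarchical map plus dual read/write memory. It is an illustration under stated assumptions, not the paper's implementation: the class and method names (Mem4NavSketch, LongTermMemory, ShortTermMemory, observe, recall) are hypothetical, and the running-mean "compression" is only a placeholder for the reversible-Transformer memory tokens that allow lossless decoding; see the linked repository for the actual system.

```python
import numpy as np
from collections import deque

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of (non-negative) integer voxel coordinates into a
    Morton code, the usual key scheme for indexing a sparse octree."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

class LongTermMemory:
    """LTM: one memory token per octree leaf / graph node. Compression here is
    a running mean placeholder; Mem4Nav stores trainable tokens produced by a
    reversible Transformer so past embeddings can be reconstructed exactly."""
    def __init__(self, dim):
        self.dim = dim
        self.tokens = {}  # key -> (token vector, observation count)

    def write(self, key, embedding):
        tok, n = self.tokens.get(key, (np.zeros(self.dim), 0))
        self.tokens[key] = ((tok * n + embedding) / (n + 1), n + 1)

    def read(self, key):
        entry = self.tokens.get(key)
        return None if entry is None else entry[0]

class ShortTermMemory:
    """STM: fixed-capacity cache of recent multimodal entries stored in
    agent-relative coordinates, for local planning and obstacle avoidance."""
    def __init__(self, capacity=32):
        self.entries = deque(maxlen=capacity)  # (relative_xyz, embedding)

    def write(self, relative_xyz, embedding):
        self.entries.append((np.asarray(relative_xyz, float), embedding))

    def retrieve(self, radius=5.0, top_k=4):
        # Keep only nearby entries and return the closest few: a pruned,
        # dynamic local context for the current step.
        near = [(np.linalg.norm(p), e) for p, e in self.entries
                if np.linalg.norm(p) <= radius]
        near.sort(key=lambda x: x[0])
        return [e for _, e in near[:top_k]]

class Mem4NavSketch:
    """Hierarchical map + dual memory: a sparse octree for fine-grained voxels,
    a landmark topology graph for high-level structure, and STM/LTM access."""
    def __init__(self, dim=512, voxel_size=1.0):
        self.voxel_size = voxel_size
        self.octree_ltm = LongTermMemory(dim)  # keyed by voxel Morton code
        self.graph_ltm = LongTermMemory(dim)   # keyed by landmark id
        self.graph_edges = {}                  # landmark adjacency (unused here)
        self.stm = ShortTermMemory()

    def observe(self, world_xyz, agent_xyz, embedding, landmark_id=None):
        # Fine-grained write: index the observation by its voxel's Morton code.
        ix, iy, iz = (int(c // self.voxel_size) for c in world_xyz)
        self.octree_ltm.write(morton_key(ix, iy, iz), embedding)
        # High-level write: also attach it to the current landmark node, if any.
        if landmark_id is not None:
            self.graph_ltm.write(landmark_id, embedding)
        # Recent context is cached in agent-relative coordinates.
        self.stm.write(np.asarray(world_xyz) - np.asarray(agent_xyz), embedding)

    def recall(self, world_xyz, landmark_id=None):
        # STM first: cheap, sharply pruned local context.
        hits = self.stm.retrieve()
        if hits:
            return hits
        # Fall back to LTM: read the token at the queried voxel or landmark.
        ix, iy, iz = (int(c // self.voxel_size) for c in world_xyz)
        for key, ltm in ((morton_key(ix, iy, iz), self.octree_ltm),
                         (landmark_id, self.graph_ltm)):
            tok = ltm.read(key) if key is not None else None
            if tok is not None:
                return [tok]
        return []
```

In this toy setup, observe(world_xyz, agent_xyz, embedding, landmark_id) writes an observation to the octree leaf, the landmark node, and the STM cache, while recall(...) answers from STM when recent local context suffices and otherwise falls back to the LTM tokens attached to the queried voxel or landmark, mirroring the STM-first, LTM-on-demand retrieval described in the abstract.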