Mem4Nav: Boosting Vision-and-Language Navigation in Urban Environments with a Hierarchical Spatial-Cognition Long-Short Memory System

June 24, 2025
Authors: Lixuan He, Haoyu Dong, Zhenxing Chen, Yangcheng Yu, Jie Feng, Yong Li
cs.AI

Abstract

Vision-and-Language Navigation (VLN) in large-scale urban environments requires embodied agents to ground linguistic instructions in complex scenes and recall relevant experiences over extended time horizons. Prior modular pipelines offer interpretability but lack unified memory, while end-to-end (M)LLM agents excel at fusing vision and language yet remain constrained by fixed context windows and implicit spatial reasoning. We introduce Mem4Nav, a hierarchical spatial-cognition long-short memory system that can augment any VLN backbone. Mem4Nav fuses a sparse octree for fine-grained voxel indexing with a semantic topology graph for high-level landmark connectivity, storing both in trainable memory tokens embedded via a reversible Transformer. Long-term memory (LTM) compresses and retains historical observations at both octree and graph nodes, while short-term memory (STM) caches recent multimodal entries in relative coordinates for real-time obstacle avoidance and local planning. At each step, STM retrieval sharply prunes the dynamic context, and, when deeper history is needed, LTM tokens are decoded losslessly to reconstruct past embeddings. Evaluated on Touchdown and Map2Seq across three backbones (modular, state-of-the-art VLN with a prompt-based LLM, and state-of-the-art VLN with a strided-attention MLLM), Mem4Nav yields 7-13 pp gains in Task Completion, significant reductions in SPD (shortest-path distance), and >10 pp improvements in nDTW (normalized dynamic time warping). Ablations confirm that both the hierarchical map and the dual memory modules are indispensable. Our code is open-sourced at https://github.com/tsinghua-fib-lab/Mem4Nav.
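
To make the hierarchical map and dual memory concrete, here is a minimal Python sketch under stated assumptions: the sparse octree is modeled as a linear (pointerless) octree keyed by Morton codes, and the name `SpatialMemory`, the voxel size, and the STM capacity are illustrative choices, not the paper's actual API.

```python
import math
from collections import deque

def morton3d(x: int, y: int, z: int, bits: int = 16) -> int:
    """Interleave the bits of non-negative (x, y, z) voxel indices into one
    integer key, the standard encoding for a linear sparse octree."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

class SpatialMemory:
    """Toy dual memory: a sparse voxel octree holding long-term tokens, plus
    a bounded short-term cache of recent entries in agent-relative coordinates."""

    def __init__(self, voxel_size: float = 0.5, stm_capacity: int = 32, bits: int = 16):
        self.voxel_size = voxel_size
        self.offset = 1 << (bits - 1)   # shift world indices into the non-negative range
        self.octree = {}                # Morton code -> compressed LTM token
        self.stm = deque(maxlen=stm_capacity)  # recent (rel_pos, embedding) pairs

    def _key(self, world_pos) -> int:
        idx = (int(math.floor(c / self.voxel_size)) + self.offset for c in world_pos)
        return morton3d(*idx)

    def write_ltm(self, world_pos, token) -> None:
        # Store (or overwrite) a compressed memory token at the voxel
        # containing world_pos.
        self.octree[self._key(world_pos)] = token

    def read_ltm(self, world_pos):
        return self.octree.get(self._key(world_pos))

    def write_stm(self, agent_pos, world_pos, embedding) -> None:
        # Cache a recent observation in coordinates relative to the agent,
        # the frame used for local planning and obstacle avoidance; the deque
        # bound mirrors STM's role as a small, fast dynamic context.
        rel = tuple(p - a for p, a in zip(world_pos, agent_pos))
        self.stm.append((rel, embedding))
```

Keying the octree by Morton code keeps only occupied voxels in a hash map, so memory grows with visited space rather than with the full city volume; the semantic topology graph over landmarks would sit one level above this, with its nodes holding LTM tokens of their own.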
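
The claim that LTM tokens can be "decoded losslessly to reconstruct past embeddings" rests on the invertibility of reversible Transformer blocks. The additive-coupling sketch below (in the style of RevNet/Reformer couplings) shows the mechanism; the layer shapes and module names are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Additive coupling: the input pair is exactly recoverable from the
    output pair, so nothing written into memory tokens is lost."""

    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Encode: (x1, x2) -> (y1, y2)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Decode: invert the coupling step by step to recover (x1, x2)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2
```

Round-tripping an observation embedding through forward and then inverse returns it exactly up to floating-point rounding, which is what lets the agent compress history into memory tokens and still reconstruct the original embeddings when deeper history is needed.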