

From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

October 20, 2025
Authors: Zhengshen Zhang, Hao Li, Yalun Dai, Zhengbang Zhu, Lei Zhou, Chenchen Liu, Dong Wang, Francis E. H. Tay, Sijin Chen, Ziwei Liu, Yuxiao Liu, Xinghang Li, Pan Zhou
cs.AI

Abstract

Existing vision-language-action (VLA) models act in the 3D real world but are typically built on 2D encoders, leaving a spatial reasoning gap that limits generalization and adaptability. Recent 3D integration techniques for VLAs either require specialized sensors and transfer poorly across modalities, or inject weak cues that lack geometry and degrade vision-language alignment. In this work, we introduce FALCON (From Spatial to Action), a novel paradigm that injects rich 3D spatial tokens into the action head. FALCON leverages spatial foundation models to deliver strong geometric priors from RGB alone, and includes an Embodied Spatial Model that can optionally fuse depth or pose for higher fidelity when available, without retraining or architectural changes. To preserve language reasoning, spatial tokens are consumed by a Spatial-Enhanced Action Head rather than being concatenated into the vision-language backbone. These designs enable FALCON to address limitations in spatial representation, modality transferability, and alignment. In comprehensive evaluations across three simulation benchmarks and eleven real-world tasks, FALCON achieves state-of-the-art performance, consistently surpassing competitive baselines and remaining robust under clutter, spatial-prompt conditioning, and variations in object scale and height.
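The abstract's central design point is that 3D spatial tokens feed the action head directly instead of being concatenated into the vision-language backbone, so language reasoning is left undisturbed. The PyTorch sketch below illustrates one plausible reading of that design under stated assumptions; the module name, cross-attention fusion, dimensions, and action parameterization are all illustrative guesses, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SpatialEnhancedActionHead(nn.Module):
    """Hypothetical action head that cross-attends from vision-language
    tokens to 3D spatial tokens, keeping the VL backbone untouched."""

    def __init__(self, dim: int = 512, num_heads: int = 8, action_dim: int = 7):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.action_proj = nn.Linear(dim, action_dim)

    def forward(self, vlm_tokens: torch.Tensor, spatial_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the VL backbone; keys/values are the geometric
        # tokens (e.g., from an RGB-only spatial foundation model), so the
        # backbone's language reasoning is never modified.
        attended, _ = self.cross_attn(vlm_tokens, spatial_tokens, spatial_tokens)
        fused = self.norm(vlm_tokens + attended)
        # Pool over tokens and regress an action (assumed: 6-DoF pose + gripper).
        return self.action_proj(fused.mean(dim=1))

# Toy usage: batch of 2, 64 VL tokens and 128 spatial tokens, width 512.
vlm_tokens = torch.randn(2, 64, 512)       # from the 2D VLM backbone
spatial_tokens = torch.randn(2, 128, 512)  # geometric priors from RGB alone
head = SpatialEnhancedActionHead()
print(head(vlm_tokens, spatial_tokens).shape)  # torch.Size([2, 7])
```

Because the spatial tokens enter only through this head, the same interface could, as the abstract suggests, accept tokens refined with optional depth or pose without retraining the backbone or changing its architecture.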