Spatial Forcing: Implicit Spatial Representation Alignment for Vision-language-action Model
October 14, 2025
Authors: Fuhao Li, Wenxuan Song, Han Zhao, Jingbo Wang, Pengxiang Ding, Donglin Wang, Long Zeng, Haoang Li
cs.AI
Abstract
Vision-language-action (VLA) models have recently shown strong potential in
enabling robots to follow language instructions and execute precise actions.
However, most VLAs are built upon vision-language models pretrained solely on
2D data, which lack accurate spatial awareness and hinder their ability to
operate in the 3D physical world. Existing solutions attempt to incorporate
explicit 3D sensor inputs such as depth maps or point clouds, but these
approaches face challenges due to sensor noise, hardware heterogeneity, and
incomplete depth coverage in existing datasets. Alternative methods that
estimate 3D cues from 2D images also suffer from the limited performance of
depth estimators. We propose Spatial Forcing (SF), a simple yet effective
alignment strategy that implicitly forces VLA models to develop spatial
comprehension capabilities without relying on explicit 3D inputs or depth
estimators. SF aligns intermediate visual embeddings of VLAs with geometric
representations produced by pretrained 3D foundation models. By enforcing
alignment at intermediate layers, SF guides VLAs to encode richer spatial
representations that enhance action precision. Extensive experiments in
simulation and real-world environments demonstrate that SF achieves
state-of-the-art results, surpassing both 2D- and 3D-based VLAs. SF further
accelerates training by up to 3.8x and improves data efficiency across diverse
robotic tasks. The project page is at https://spatial-forcing.github.io/.
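The core mechanism the abstract describes is aligning a VLA's intermediate visual embeddings with geometric features from a frozen, pretrained 3D foundation model. A minimal sketch of one plausible form of that alignment objective is below; the shapes, the linear projection head, and the negative-cosine-similarity loss are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, vla_dim, geo_dim = 8, 16, 12

# Stand-ins for: intermediate visual embeddings from the VLA backbone, and
# target geometric representations from a frozen 3D foundation model
# (e.g., per-patch features for the same input image).
vla_embeddings = rng.normal(size=(num_tokens, vla_dim))
geo_features = rng.normal(size=(num_tokens, geo_dim))

# A learnable projection head mapping VLA embeddings into the 3D feature space.
W = rng.normal(size=(vla_dim, geo_dim)) * 0.1

def alignment_loss(x, y, W):
    """Negative mean cosine similarity between projected x and targets y.

    Minimizing this loss pushes each projected VLA token embedding toward
    the corresponding geometric feature, implicitly injecting spatial
    structure without any explicit depth or point-cloud input.
    """
    p = x @ W
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    t = y / np.linalg.norm(y, axis=-1, keepdims=True)
    return -np.mean(np.sum(p * t, axis=-1))

loss = alignment_loss(vla_embeddings, geo_features, W)
print(f"alignment loss: {loss:.4f}")  # always in [-1, 1]; lower is better aligned
```

In training, this auxiliary loss would be added to the usual action-prediction loss, with gradients flowing into the VLA backbone and projection head while the 3D foundation model stays frozen.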