

A Survey on Vision-Language-Action Models: An Action Tokenization Perspective

July 2, 2025
作者: Yifan Zhong, Fengshuo Bai, Shaofei Cai, Xuchuan Huang, Zhang Chen, Xiaowei Zhang, Yuanfei Wang, Shaoyang Guo, Tianrui Guan, Ka Nam Lui, Zhiquan Qi, Yitao Liang, Yuanpei Chen, Yaodong Yang
cs.AI

Abstract

The remarkable advancements of vision and language foundation models in multimodal understanding, reasoning, and generation have sparked growing efforts to extend such intelligence to the physical world, fueling the flourishing of vision-language-action (VLA) models. Despite seemingly diverse approaches, we observe that current VLA models can be unified under a single framework: vision and language inputs are processed by a series of VLA modules, producing a chain of action tokens that progressively encode more grounded and actionable information, ultimately generating executable actions. We further determine that the primary design choice distinguishing VLA models lies in how action tokens are formulated, which can be categorized into language description, code, affordance, trajectory, goal state, latent representation, raw action, and reasoning. However, there remains a lack of comprehensive understanding regarding action tokens, significantly impeding effective VLA development and obscuring future directions. Therefore, this survey aims to categorize and interpret existing VLA research through the lens of action tokenization, distill the strengths and limitations of each token type, and identify areas for improvement. Through this systematic review and analysis, we offer a synthesized outlook on the broader evolution of VLA models, highlight underexplored yet promising directions, and contribute guidance for future research, hoping to bring the field closer to general-purpose intelligence.
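
To make the abstract's unified framework concrete (vision and language inputs flow through a series of VLA modules, each emitting a progressively more grounded action token until an executable action is produced), here is a minimal illustrative sketch in Python. All names (`run_vla`, `planner`, `policy`, `ActionTokenType`) and the two-stage planner/policy pipeline are hypothetical assumptions for exposition, not the survey's implementation; the token taxonomy mirrors the eight categories named in the abstract.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List

# The eight action-token categories listed in the abstract.
class ActionTokenType(Enum):
    LANGUAGE_DESCRIPTION = auto()
    CODE = auto()
    AFFORDANCE = auto()
    TRAJECTORY = auto()
    GOAL_STATE = auto()
    LATENT_REPRESENTATION = auto()
    RAW_ACTION = auto()
    REASONING = auto()

@dataclass
class ActionToken:
    kind: ActionTokenType
    payload: object  # e.g. a sentence, waypoints, a latent vector, joint deltas

# A VLA module maps (inputs, token chain so far) to a new, more grounded token.
VLAModule = Callable[[dict, List[ActionToken]], ActionToken]

def run_vla(vision, language, modules: List[VLAModule]) -> List[ActionToken]:
    """Unified framework from the abstract: inputs pass through a series of
    VLA modules, producing a chain of action tokens that progressively encode
    more grounded and actionable information."""
    inputs = {"vision": vision, "language": language}
    chain: List[ActionToken] = []
    for module in modules:
        chain.append(module(inputs, chain))
    return chain

# Hypothetical two-stage pipeline: a planner emits a language subgoal,
# then a policy grounds it into a raw low-level action.
def planner(inputs, chain):
    return ActionToken(ActionTokenType.LANGUAGE_DESCRIPTION,
                       f"pick up the object in: {inputs['language']}")

def policy(inputs, chain):
    _subgoal = chain[-1].payload  # most recent, more abstract token
    # Illustrative raw action: end-effector deltas plus gripper command.
    return ActionToken(ActionTokenType.RAW_ACTION, [0.1, -0.2, 0.05, 1.0])

if __name__ == "__main__":
    tokens = run_vla(vision="rgb_frame", language="grab the red cup",
                     modules=[planner, policy])
    for t in tokens:
        print(t.kind.name, "->", t.payload)
```

Under this framing, the survey's taxonomy amounts to asking which `ActionTokenType` each intermediate module emits: a code-generation VLA would emit CODE tokens, an affordance-grounded one AFFORDANCE tokens, and so on, with RAW_ACTION typically the terminal, directly executable stage.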