A Survey on Vision-Language-Action Models: An Action Tokenization Perspective
July 2, 2025
Authors: Yifan Zhong, Fengshuo Bai, Shaofei Cai, Xuchuan Huang, Zhang Chen, Xiaowei Zhang, Yuanfei Wang, Shaoyang Guo, Tianrui Guan, Ka Nam Lui, Zhiquan Qi, Yitao Liang, Yuanpei Chen, Yaodong Yang
cs.AI
Abstract
The remarkable advancements of vision and language foundation models in
multimodal understanding, reasoning, and generation have sparked growing efforts
to extend such intelligence to the physical world, fueling the flourishing of
vision-language-action (VLA) models. Despite seemingly diverse approaches, we
observe that current VLA models can be unified under a single framework: vision
and language inputs are processed by a series of VLA modules, producing a chain
of action tokens that progressively encode more grounded and
actionable information, ultimately generating executable actions. We further
determine that the primary design choice distinguishing VLA models lies in how
action tokens are formulated, which can be categorized into language
description, code, affordance, trajectory, goal state, latent representation,
raw action, and reasoning. However, there remains a lack of comprehensive
understanding regarding action tokens, significantly impeding effective VLA
development and obscuring future directions. Therefore, this survey aims to
categorize and interpret existing VLA research through the lens of action
tokenization, distill the strengths and limitations of each token type, and
identify areas for improvement. Through this systematic review and analysis, we
offer a synthesized outlook on the broader evolution of VLA models, highlight
underexplored yet promising directions, and contribute guidance for future
research, hoping to bring the field closer to general-purpose intelligence.