WorldVLA: Towards Autoregressive Action World Model
June 26, 2025
Authors: Jun Cen, Chaohui Yu, Hangjie Yuan, Yuming Jiang, Siteng Huang, Jiayan Guo, Xin Li, Yibing Song, Hao Luo, Fan Wang, Deli Zhao, Hao Chen
cs.AI
Abstract
We present WorldVLA, an autoregressive action world model that unifies action and image understanding and generation. Our WorldVLA integrates a Vision-Language-Action (VLA) model and a world model within a single framework. The world model predicts future images by leveraging both action and image understanding, with the goal of learning the underlying physics of the environment to improve action generation. Meanwhile, the action model generates subsequent actions based on image observations, which aids visual understanding and in turn benefits the visual generation of the world model. We demonstrate that WorldVLA outperforms standalone action and world models, highlighting the mutual enhancement between the world model and the action model. In addition, we find that the performance of the action model deteriorates when generating sequences of actions in an autoregressive manner. This phenomenon can be attributed to the model's limited generalization capability for action prediction, which causes errors from earlier actions to propagate to subsequent ones. To address this issue, we propose an attention mask strategy that selectively masks prior actions during the generation of the current action, which yields a significant performance improvement on the action chunk generation task.
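For illustration, the sketch below shows one way such an action-masking attention pattern could be built for a token-based autoregressive model. The sequence layout, the helper name `build_action_world_mask`, and the token counts are hypothetical rather than taken from the paper; the sketch only captures the stated idea that each action in a chunk attends to the image observation tokens but not to previously generated actions, so early action errors cannot propagate forward.

```python
import torch


def build_action_world_mask(num_obs_tokens: int,
                            num_actions: int,
                            tokens_per_action: int) -> torch.Tensor:
    """Attention mask sketch for autoregressive action chunk generation.

    Assumed sequence layout: [obs tokens][action 1 tokens]...[action K tokens].
    A standard causal mask is applied everywhere, but the tokens of action i
    are additionally blocked from attending to tokens of earlier actions j < i,
    so every action in the chunk conditions on the observation only.

    Returns a boolean mask of shape (L, L) where True means "may attend".
    """
    L = num_obs_tokens + num_actions * tokens_per_action
    mask = torch.tril(torch.ones(L, L)).bool()  # causal base mask

    for i in range(num_actions):
        start_i = num_obs_tokens + i * tokens_per_action
        # Block attention from action i's tokens to all earlier action tokens,
        # while leaving attention to the observation tokens intact.
        mask[start_i:start_i + tokens_per_action, num_obs_tokens:start_i] = False

    return mask


# Example: 4 observation tokens, a chunk of 3 actions, 2 tokens per action.
if __name__ == "__main__":
    m = build_action_world_mask(num_obs_tokens=4, num_actions=3, tokens_per_action=2)
    print(m.int())
```

Such a mask could be passed to a transformer's attention layers in place of the default causal mask during action chunk generation; intra-action tokens still attend causally to one another, and image tokens remain fully visible to all later positions.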