villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models
July 31, 2025
Authors: Xiaoyu Chen, Hangxing Wei, Pushi Zhang, Chuheng Zhang, Kaixin Wang, Yanjiang Guo, Rushuai Yang, Yucen Wang, Xinquan Xiao, Li Zhao, Jianyu Chen, Jiang Bian
cs.AI
Abstract
Visual-Language-Action (VLA) models have emerged as a popular paradigm for
learning robot manipulation policies that can follow language instructions and
generalize to novel scenarios. Recent work has begun to explore the
incorporation of latent actions, an abstract representation of visual change
between two frames, into VLA pre-training. In this paper, we introduce villa-X,
a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent
action modeling for learning generalizable robot manipulation policies. Our
approach improves both how latent actions are learned and how they are
incorporated into VLA pre-training. Together, these contributions enable
villa-X to achieve superior performance across simulated environments including
SIMPLER and LIBERO, as well as on two real-world robot setups including gripper
and dexterous hand manipulation. We believe the ViLLA paradigm holds
significant promise, and that our villa-X provides a strong foundation for
future research.
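The abstract describes latent actions as abstract representations of the visual change between two consecutive frames. As a rough illustration only (this is not villa-X's actual architecture; the module names, layer sizes, and the reconstruction objective below are assumptions), a latent action model of this kind can be sketched as an inverse-dynamics encoder that compresses two frames into a small latent code, paired with a forward-dynamics decoder that must reproduce the next frame from the current frame plus that code:

```python
# Minimal sketch of a latent action model, assuming a PyTorch setup.
# All sizes and names are illustrative; they do not reflect the paper.
import torch
import torch.nn as nn


class LatentActionModel(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        # Inverse-dynamics encoder: (frame_t, frame_t+1) -> latent action z.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Forward-dynamics decoder: (frame_t, z) -> predicted frame_t+1.
        self.decoder = nn.Sequential(
            nn.Conv2d(3 + latent_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame_t: torch.Tensor, frame_next: torch.Tensor):
        # z summarizes "what changed" between the two frames.
        z = self.encoder(torch.cat([frame_t, frame_next], dim=1))
        # Broadcast z over the spatial grid and reconstruct the next frame;
        # the reconstruction loss pushes z to carry action-relevant change
        # rather than static appearance.
        b, _, h, w = frame_t.shape
        z_map = z.view(b, -1, 1, 1).expand(b, z.shape[1], h, w)
        pred_next = self.decoder(torch.cat([frame_t, z_map], dim=1))
        return z, pred_next


# Usage: frame pairs from robot or video data, reconstruction loss on the
# predicted next frame.
model = LatentActionModel()
frame_t = torch.randn(2, 3, 64, 64)
frame_next = torch.randn(2, 3, 64, 64)
z, pred_next = model(frame_t, frame_next)
loss = nn.functional.mse_loss(pred_next, frame_next)
```

In such a setup the latent code z can then serve as a pseudo-action label for VLA pre-training on action-free video; how villa-X actually learns and integrates latent actions is detailed in the paper itself.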