WMPO: World Model-based Policy Optimization for Vision-Language-Action Models
November 12, 2025
Authors: Fangqi Zhu, Zhengyang Yan, Zicong Hong, Quanxin Shou, Xiao Ma, Song Guo
cs.AI
Abstract
Vision-Language-Action (VLA) models have shown strong potential for general-purpose robotic manipulation, but their reliance on expert demonstrations limits their ability to learn from failures and perform self-corrections. Reinforcement learning (RL) addresses these limitations through self-improving interactions with the physical environment, but suffers from high sample complexity on real robots. We introduce World-Model-based Policy Optimization (WMPO), a principled framework for on-policy VLA RL without interacting with the real environment. In contrast to widely used latent world models, WMPO focuses on pixel-based predictions that align the "imagined" trajectories with the VLA features pretrained on web-scale images. Crucially, WMPO enables the policy to perform on-policy GRPO, which provides stronger performance than commonly used off-policy methods. Extensive experiments in both simulation and real-robot settings demonstrate that WMPO (i) substantially improves sample efficiency, (ii) achieves stronger overall performance, (iii) exhibits emergent behaviors such as self-correction, and (iv) demonstrates robust generalization and lifelong learning capabilities.
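To make the abstract's core idea concrete, below is a minimal sketch of what an on-policy GRPO update over world-model "imagined" rollouts could look like. It is not the paper's implementation: the interfaces `world_model.rollout`, `vla_policy.log_prob`, and `reward_fn`, as well as the reward and clipping details, are assumptions for illustration only.

```python
# Hypothetical sketch of a WMPO-style GRPO step: sample a group of imagined
# trajectories per task from a pixel-space world model, compute group-relative
# advantages, and apply a clipped policy-gradient update to the VLA policy.
import torch

def grpo_step(vla_policy, world_model, reward_fn, tasks,
              group_size=8, horizon=64, clip_eps=0.2, optimizer=None):
    """One on-policy GRPO update on trajectories imagined by the world model."""
    losses = []
    for task in tasks:
        # Roll out a group of imagined trajectories for the same task prompt.
        trajs = [world_model.rollout(vla_policy, task, horizon)
                 for _ in range(group_size)]
        rewards = torch.tensor([reward_fn(t) for t in trajs], dtype=torch.float32)

        # Group-relative advantage: normalize rewards within the group
        # (no learned value function, as in standard GRPO).
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

        for traj, a in zip(trajs, adv):
            # Log-probs of the taken actions under the current policy; the
            # rollout stored the behavior log-probs collected at sampling time.
            new_logp = vla_policy.log_prob(traj.observations, traj.actions)
            ratio = torch.exp(new_logp - traj.old_log_probs.detach())
            # PPO-style clipped surrogate weighted by the group-relative advantage.
            unclipped = ratio * a
            clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * a
            losses.append(-torch.min(unclipped, clipped).mean())

    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the rollouts are generated by the world model rather than a real robot, every gradient step stays on-policy without paying the physical-interaction cost the abstract highlights; the sketch above only illustrates that structure, not the paper's specific architecture or reward design.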