WMPO: World Model-based Policy Optimization for Vision-Language-Action Models
November 12, 2025
Authors: Fangqi Zhu, Zhengyang Yan, Zicong Hong, Quanxin Shou, Xiao Ma, Song Guo
cs.AI
Abstract
Vision-Language-Action (VLA) models have shown strong potential for general-purpose robotic manipulation, but their reliance on expert demonstrations limits their ability to learn from failures and perform self-corrections. Reinforcement learning (RL) addresses these limitations through self-improving interaction with the physical environment, but suffers from high sample complexity on real robots. We introduce World-Model-based Policy Optimization (WMPO), a principled framework for on-policy VLA RL without interacting with the real environment. In contrast to widely used latent world models, WMPO focuses on pixel-based predictions that align the "imagined" trajectories with the VLA features pretrained on web-scale images. Crucially, WMPO enables the policy to perform on-policy GRPO, which provides stronger performance than commonly used off-policy methods. Extensive experiments in both simulation and real-robot settings demonstrate that WMPO (i) substantially improves sample efficiency, (ii) achieves stronger overall performance, (iii) exhibits emergent behaviors such as self-correction, and (iv) demonstrates robust generalization and lifelong learning capabilities.
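To make the on-policy GRPO step concrete, the minimal Python sketch below computes GRPO's group-relative advantages over a group of trajectories rolled out entirely inside a world model, rather than on a real robot. It is not the paper's implementation: the `world_model.step` and `policy.sample` interfaces, the reward signal, and the `group_size`/`horizon` values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): GRPO-style group-relative
# advantages computed over trajectories "imagined" by a world model.
import numpy as np

def grpo_advantages(returns, eps=1e-8):
    """GRPO advantage: normalize each rollout's return by the group mean/std
    (no learned value critic)."""
    r = np.asarray(returns, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def imagined_group_rollout(world_model, policy, obs0, group_size=8, horizon=16):
    """Roll out `group_size` trajectories inside the world model from the same
    initial pixel observation, and score each with its accumulated reward.
    `world_model` and `policy` are assumed duck-typed objects (hypothetical API)."""
    trajectories, returns = [], []
    for _ in range(group_size):
        obs, total, traj = obs0, 0.0, []
        for _ in range(horizon):
            action = policy.sample(obs)                  # VLA policy acts on pixels
            obs, reward = world_model.step(obs, action)  # pixel-level prediction
            total += reward
            traj.append((obs, action, reward))
        trajectories.append(traj)
        returns.append(total)
    return trajectories, grpo_advantages(returns)
```

Because every rollout in the group is generated by the current policy inside the world model, the resulting advantages can drive an on-policy update without any real-environment interaction, which is the sample-efficiency argument the abstract makes.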