Reinforcing Action Policies by Prophesying
November 25, 2025
Authors: Jiahui Zhang, Ze Huang, Chun Gu, Zipei Ma, Li Zhang
cs.AI
Abstract
Vision-Language-Action (VLA) policies excel at aligning language, perception, and robot control. However, most VLAs are trained purely by imitation, which overfits to the demonstrations and is brittle under distribution shift. Reinforcement learning (RL) directly optimizes task reward and thus addresses this misalignment, but real-robot interaction is expensive and conventional simulators are hard to engineer and transfer. We address both data efficiency and optimization stability in VLA post-training via a learned world model and an RL procedure tailored to flow-based action heads. Specifically, we introduce Prophet, a unified action-to-video robot actuation framework pretrained on large-scale, heterogeneous robot data to learn reusable action-outcome dynamics. It can adapt few-shot to new robots, objects, and environments, yielding a rollout-ready simulator. On top of Prophet, we reinforce action policies with Flow-action-GRPO (FA-GRPO), which adapts Flow-GRPO to operate on VLA actions, and with FlowScale, a stepwise reweighting that rescales per-step gradients in the flow head. Together, Prophet, FA-GRPO, and FlowScale constitute ProphRL, a practical, data- and compute-efficient path to VLA post-training. Experiments show 5-17% success-rate gains on public benchmarks and 24-30% gains on real robots across different VLA variants.
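The abstract names two policy-side ingredients: FA-GRPO, a GRPO-style objective applied over the VLA's flow-based action steps, and FlowScale, a per-step reweighting of the flow head's gradients. The snippet below is a minimal, hypothetical sketch of how a clipped, group-relative objective with stepwise rescaling could be wired up; the function name, the step_scales schedule, and the treatment of each flow step as a Gaussian policy step are assumptions for illustration only, not the paper's actual formulation.

```python
# Hypothetical sketch of a GRPO-style update over a flow-based action head.
# Names (fa_grpo_loss, step_scales, group sizes) are illustrative assumptions,
# not the paper's API. Each flow/denoising step is treated as a stochastic
# policy step with its own log-probability, in the spirit of Flow-GRPO.
import torch

def fa_grpo_loss(logp_new, logp_old, rewards, step_scales, clip_eps=0.2):
    """Clipped group-relative surrogate with per-step reweighting.

    logp_new, logp_old: (group, steps) per-flow-step action log-probs
    rewards:            (group,) scalar task rewards from world-model rollouts
    step_scales:        (steps,) assumed FlowScale-like per-step weights
    """
    # Group-relative advantage: normalize rewards within the rollout group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)          # (group,)
    ratio = torch.exp(logp_new - logp_old)                             # (group, steps)
    unclipped = ratio * adv[:, None]
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv[:, None]
    per_step = -torch.minimum(unclipped, clipped)                      # (group, steps)
    # Stepwise reweighting: rescale each flow step's contribution before averaging.
    return (per_step * step_scales[None, :]).mean()

# Toy usage with dummy tensors standing in for a VLA flow head's outputs.
group, steps = 8, 10
logp_old = torch.randn(group, steps)
logp_new = (logp_old + 0.05 * torch.randn(group, steps)).requires_grad_()
rewards = torch.rand(group)                     # e.g. task success from simulated rollouts
step_scales = torch.linspace(1.0, 0.5, steps)   # assumed decay over flow steps
loss = fa_grpo_loss(logp_new, logp_old, rewards, step_scales)
loss.backward()
```

In this sketch the rollouts and their rewards would come from the learned world model rather than a hand-built simulator, which is what makes the group-relative sampling cheap enough to run at post-training scale.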