
VLA-R1: Enhancing Reasoning in Vision-Language-Action Models

October 2, 2025
Authors: Angen Ye, Zeyu Zhang, Boyuan Wang, Xiaofeng Wang, Dapeng Zhang, Zheng Zhu
cs.AI

Abstract

Vision-Language-Action (VLA) models aim to unify perception, language understanding, and action generation, offering strong cross-task and cross-scene generalization with broad impact on embodied AI. However, current VLA models often lack explicit step-by-step reasoning, instead emitting final actions without considering affordance constraints or geometric relations. Their post-training pipelines also rarely reinforce reasoning quality, relying primarily on supervised fine-tuning with weak reward design. To address these challenges, we present VLA-R1, a reasoning-enhanced VLA that integrates Reinforcement Learning from Verifiable Rewards (RLVR) with Group Relative Policy Optimization (GRPO) to systematically optimize both reasoning and execution. Specifically, we design an RLVR-based post-training strategy with verifiable rewards for region alignment, trajectory consistency, and output formatting, thereby strengthening reasoning robustness and execution accuracy. Moreover, we develop VLA-CoT-13K, a high-quality dataset that provides chain-of-thought supervision explicitly aligned with affordance and trajectory annotations. Furthermore, extensive evaluations on in-domain, out-of-domain, simulation, and real-robot platforms demonstrate that VLA-R1 achieves superior generalization and real-world performance compared to prior VLA methods. We plan to release the model, code, and dataset following the publication of this work. Code: https://github.com/GigaAI-research/VLA-R1. Website: https://gigaai-research.github.io/VLA-R1.
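The abstract describes post-training with GRPO on verifiable rewards covering region alignment, trajectory consistency, and output formatting. As a rough illustration of how such a composite reward and a group-relative advantage might be computed, here is a minimal Python sketch; the reward terms, their equal weighting, and all function names are assumptions made for illustration, not the paper's actual implementation.

```python
# Hedged sketch (not the authors' code): a minimal composite verifiable reward
# and GRPO-style group-relative advantages, as suggested by the abstract.
# The reward components, their weights, and all names here are hypothetical.

import numpy as np

def verifiable_reward(pred_box, gt_box, pred_traj, gt_traj, well_formatted):
    """Composite reward: region alignment + trajectory consistency + output format."""
    # Region alignment: IoU between the predicted and annotated affordance boxes.
    ix1, iy1 = max(pred_box[0], gt_box[0]), max(pred_box[1], gt_box[1])
    ix2, iy2 = min(pred_box[2], gt_box[2]), min(pred_box[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(pred_box) + area(gt_box) - inter + 1e-8)

    # Trajectory consistency: mean point-wise distance, squashed into (0, 1].
    traj_err = np.mean(np.linalg.norm(np.asarray(pred_traj) - np.asarray(gt_traj), axis=-1))
    traj_r = float(np.exp(-traj_err))

    # Output format: binary check that the response parses into the expected schema.
    fmt_r = 1.0 if well_formatted else 0.0

    return iou + traj_r + fmt_r  # equal weights, purely illustrative

def grpo_advantages(rewards):
    """Group-relative advantages: each sampled response's reward is normalized
    by the mean and std of its own group, with no learned value function."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)
```

In a full RLVR loop, each instruction would be sampled several times, each sampled response scored with the verifiable reward, and the group-normalized advantages used to weight the policy-gradient update in place of a critic.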