
RynnVLA-002: A Unified Vision-Language-Action and World Model

November 21, 2025
Authors: Jun Cen, Siteng Huang, Yuqian Yuan, Hangjie Yuan, Chaohui Yu, Yuming Jiang, Jiayan Guo, Kehan Li, Hao Luo, Fan Wang, Xin Li, Deli Zhao, Hao Chen
cs.AI

Abstract

We introduce RynnVLA-002, a unified Vision-Language-Action (VLA) and world model. The world model leverages action and visual inputs to predict future image states, learning the underlying physics of the environment to refine action generation. Conversely, the VLA model produces subsequent actions from image observations, enhancing visual understanding and supporting the world model's image generation. The unified framework of RynnVLA-002 enables joint learning of environmental dynamics and action planning. Our experiments show that RynnVLA-002 surpasses the individual VLA and world models, demonstrating their mutual enhancement. We evaluate RynnVLA-002 on both simulation and real-world robot tasks. Without pretraining, RynnVLA-002 achieves a 97.4% success rate on the LIBERO simulation benchmark, while in real-world LeRobot experiments its integrated world model boosts the overall success rate by 50%.
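The abstract describes two coupled branches trained jointly: a VLA branch that maps image observations to actions, and a world-model branch that maps observations plus actions to predicted future image states. The sketch below illustrates what such a joint objective could look like; all module names, dimensions, and the loss weighting are assumptions for illustration, not the paper's actual architecture or training recipe.

```python
# Minimal sketch of a joint VLA + world-model objective (illustrative only).
# The shared encoder, head structure, and 0.5 loss weight are assumptions,
# not RynnVLA-002's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedVLAWorldModel(nn.Module):
    def __init__(self, obs_dim=512, act_dim=7):
        super().__init__()
        # Shared visual encoder feeding both branches (hypothetical).
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        # VLA branch: image features -> next action.
        self.action_head = nn.Linear(256, act_dim)
        # World-model branch: image features + action -> predicted next image features.
        self.dynamics_head = nn.Linear(256 + act_dim, obs_dim)

    def forward(self, obs, action):
        h = self.encoder(obs)
        pred_action = self.action_head(h)
        pred_next_obs = self.dynamics_head(torch.cat([h, action], dim=-1))
        return pred_action, pred_next_obs

model = UnifiedVLAWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: observation features, expert action, next-step observation features.
obs = torch.randn(8, 512)
action = torch.randn(8, 7)
next_obs = torch.randn(8, 512)

pred_action, pred_next_obs = model(obs, action)
# Joint objective: action imitation + future-state prediction.
loss = F.mse_loss(pred_action, action) + 0.5 * F.mse_loss(pred_next_obs, next_obs)
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point mirrored here is that both losses backpropagate through a shared representation, so the dynamics-prediction signal can regularize the features used for action generation, and vice versa.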