

World Action Models are Zero-shot Policies

February 17, 2026
Authors: Seonghyeon Ye, Yunhao Ge, Kaiyuan Zheng, Shenyuan Gao, Sihyun Yu, George Kurian, Suneel Indupuru, You Liang Tan, Chuning Zhu, Jiannan Xiang, Ayaan Malik, Kyungmin Lee, William Liang, Nadun Ranawaka, Jiasheng Gu, Yinzhen Xu, Guanzhi Wang, Fengyuan Hu, Avnish Narayan, Johan Bjorck, Jing Wang, Gwanghyun Kim, Dantong Niu, Ruijie Zheng, Yuqi Xie, Jimmy Wu, Qi Wang, Ryan Julian, Danfei Xu, Yilun Du, Yevgen Chebotar, Scott Reed, Jan Kautz, Yuke Zhu, Linxi "Jim" Fan, Joel Jang
cs.AI

Abstract

State-of-the-art Vision-Language-Action (VLA) models excel at semantic generalization but struggle to generalize to unseen physical motions in novel environments. We introduce DreamZero, a World Action Model (WAM) built upon a pretrained video diffusion backbone. Unlike VLAs, WAMs learn physical dynamics by predicting future world states and actions, using video as a dense representation of how the world evolves. By jointly modeling video and action, DreamZero learns diverse skills effectively from heterogeneous robot data without relying on repetitive demonstrations. This results in over 2x improvement in generalization to new tasks and environments compared to state-of-the-art VLAs in real robot experiments. Crucially, through model and system optimizations, we enable a 14B autoregressive video diffusion model to perform real-time closed-loop control at 7Hz. Finally, we demonstrate two forms of cross-embodiment transfer: video-only demonstrations from other robots or humans yield a relative improvement of over 42% on unseen task performance with just 10-20 minutes of data. More surprisingly, DreamZero enables few-shot embodiment adaptation, transferring to a new embodiment with only 30 minutes of play data while retaining zero-shot generalization.
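To make the joint video-action prediction loop concrete, below is a minimal, hypothetical sketch of how a WAM-style controller might be run in closed loop. The `WorldActionModel` class, its `predict` interface, the `get_frame`/`apply_action` callbacks, the 8-step action chunk, 7-DoF action space, and 4-frame observation window are all illustrative assumptions, not the paper's actual API; only the ~7 Hz re-planning rate and the joint video/action prediction come from the abstract.

```python
import time
import numpy as np

class WorldActionModel:
    """Illustrative stand-in for a World Action Model (WAM).

    Per the abstract, the real model is a ~14B autoregressive video
    diffusion network that jointly predicts future video frames and
    robot actions from past observations and a language instruction.
    This stub only mimics a plausible interface.
    """

    def predict(self, frames: np.ndarray, instruction: str):
        horizon, action_dim = 8, 7  # hypothetical chunk length / DoF
        future_video = np.zeros((horizon, *frames.shape[1:]), dtype=frames.dtype)
        action_chunk = np.zeros((horizon, action_dim), dtype=np.float32)
        return future_video, action_chunk

def closed_loop(model, get_frame, apply_action, instruction,
                hz=7.0, max_steps=1000):
    """Re-plan at ~7 Hz: predict a joint video/action rollout, execute
    the first action, then re-observe (receding-horizon style)."""
    period = 1.0 / hz
    history = []
    for _ in range(max_steps):
        t0 = time.time()
        history.append(get_frame())
        history = history[-4:]  # short observation window (assumption)
        _video, actions = model.predict(np.stack(history), instruction)
        apply_action(actions[0])  # execute first action, then re-plan
        time.sleep(max(0.0, period - (time.time() - t0)))
```

Executing only the first action of each predicted chunk before re-planning is one common receding-horizon choice; the abstract does not specify how much of each chunk DreamZero executes per cycle.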