

MMaDA-VLA: Large Diffusion Vision-Language-Action Model with Unified Multi-Modal Instruction and Generation

March 26, 2026
Authors: Yang Liu, Pengxiang Ding, Tengyue Jiang, Xudong Wang, Wenxuan Song, Minghui Lin, Han Zhao, Hongyin Zhang, Zifeng Zhuang, Wei Zhao, Siteng Huang, Jinkui Shi, Donglin Wang
cs.AI

Abstract

Vision-Language-Action (VLA) models aim to control robots for manipulation from visual observations and natural-language instructions. However, existing hierarchical and autoregressive paradigms often introduce architectural overhead, suffer from temporal inconsistency and long-horizon error accumulation, and lack a mechanism to capture environment dynamics without extra modules. To this end, we present MMaDA-VLA, a fully native pre-trained large diffusion VLA model that unifies multi-modal understanding and generation in a single framework. Our key idea is a native discrete diffusion formulation that embeds language, images, and continuous robot controls into one discrete token space and trains a single backbone with masked token denoising to jointly generate a future goal observation and an action chunk in parallel. Iterative denoising enables global, order-free refinement, improving long-horizon consistency while grounding actions in predicted future visual outcomes without auxiliary world models. Experiments across simulation benchmarks and real-world tasks show state-of-the-art performance, achieving 98.0% average success on LIBERO and 4.78 average length on CALVIN.
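To make the masked-token denoising idea concrete, below is a minimal, hypothetical sketch of iterative parallel decoding over a unified discrete token sequence. The `backbone` model, `MASK_ID`, vocabulary, and confidence-based unmasking schedule are illustrative assumptions, not the released MMaDA-VLA components or training recipe; the paper's actual tokenizers and schedule may differ.

```python
import torch

MASK_ID = 0      # hypothetical id of the [MASK] token
NUM_STEPS = 8    # number of iterative denoising steps

@torch.no_grad()
def denoise(backbone, prompt_tokens, num_target_tokens):
    """Iteratively unmask goal-image and action tokens in parallel.

    prompt_tokens     : (B, L_p) language + current-observation tokens (kept fixed)
    num_target_tokens : length of the masked region to generate
                        (future goal-observation tokens followed by an action chunk)
    """
    B = prompt_tokens.size(0)
    device = prompt_tokens.device
    # Start with the target region fully masked.
    target = torch.full((B, num_target_tokens), MASK_ID,
                        dtype=torch.long, device=device)

    for step in range(NUM_STEPS):
        seq = torch.cat([prompt_tokens, target], dim=1)
        # Assumed interface: backbone returns per-position vocabulary logits.
        logits = backbone(seq)[:, prompt_tokens.size(1):]   # (B, T, vocab)
        conf, pred = logits.softmax(dim=-1).max(dim=-1)     # per-token confidence

        # Reveal a growing fraction of the most confident tokens each step;
        # the rest stay masked and are refined later (global, order-free).
        still_masked = target.eq(MASK_ID)
        keep_ratio = 1.0 - (step + 1) / NUM_STEPS
        num_keep_masked = int(keep_ratio * num_target_tokens)
        conf = conf.masked_fill(~still_masked, float("inf"))  # revealed tokens stay
        if num_keep_masked > 0:
            thresh = conf.kthvalue(num_keep_masked, dim=1, keepdim=True).values
            reveal = still_masked & (conf > thresh)
        else:
            reveal = still_masked
        target = torch.where(reveal, pred, target)

    # Caller splits `target` into goal-image tokens and action tokens,
    # then decodes the action tokens back to continuous robot controls.
    return target
```

Because every masked position is predicted jointly at each step, the action chunk is refined together with the predicted future observation rather than emitted token by token, which is the mechanism the abstract credits for long-horizon consistency without an auxiliary world model.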