ChatPaper.ai


MMaDA-VLA: Large Diffusion Vision-Language-Action Model with Unified Multi-Modal Instruction and Generation

March 26, 2026
Authors: Yang Liu, Pengxiang Ding, Tengyue Jiang, Xudong Wang, Wenxuan Song, Minghui Lin, Han Zhao, Hongyin Zhang, Zifeng Zhuang, Wei Zhao, Siteng Huang, Jinkui Shi, Donglin Wang
cs.AI

Abstract

Vision-Language-Action (VLA) models aim to control robots for manipulation from visual observations and natural-language instructions. However, existing hierarchical and autoregressive paradigms often introduce architectural overhead, suffer from temporal inconsistency and long-horizon error accumulation, and lack a mechanism to capture environment dynamics without extra modules. To this end, we present MMaDA-VLA, a fully native pre-trained large diffusion VLA model that unifies multi-modal understanding and generation in a single framework. Our key idea is a native discrete diffusion formulation that embeds language, images, and continuous robot controls into one discrete token space and trains a single backbone with masked token denoising to jointly generate a future goal observation and an action chunk in parallel. Iterative denoising enables global, order-free refinement, improving long-horizon consistency while grounding actions in predicted future visual outcomes without auxiliary world models. Experiments across simulation benchmarks and real-world tasks show state-of-the-art performance, achieving 98.0% average success on LIBERO and 4.78 average length on CALVIN.
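The masked-token denoising described above follows the general pattern of iterative parallel decoding for discrete diffusion: all target positions (goal-observation tokens and action tokens) start masked, and each denoising step commits the model's most confident predictions until no masks remain. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's implementation: `toy_denoiser`, `MASK`, and the confidence-based unmasking schedule are stand-ins for the actual MMaDA-VLA backbone and sampler.

```python
import random

MASK = -1   # hypothetical mask-token id
VOCAB = 16  # toy vocabulary size

def toy_denoiser(tokens):
    """Stand-in for the diffusion backbone: returns a (token, confidence)
    guess for every position. The real model would condition on the
    language instruction and visual observation; here we just sample."""
    return [(random.randrange(VOCAB), random.random()) for _ in tokens]

def masked_denoise(seq_len, steps=4):
    """Iterative, order-free refinement: start fully masked, and at each
    step unmask a confidence-ranked share of the remaining positions."""
    tokens = [MASK] * seq_len
    for step in range(steps):
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        if not masked:
            break
        preds = toy_denoiser(tokens)
        # commit the top-confidence fraction this step; all positions are
        # refined in parallel, with no left-to-right generation order
        k = max(1, len(masked) // (steps - step))
        for i in sorted(masked, key=lambda i: preds[i][1], reverse=True)[:k]:
            tokens[i] = preds[i][0]
    return tokens

print(masked_denoise(8))  # 8 committed tokens, no MASK entries remain
```

In the paper's setting, one such denoised sequence would carry both the future goal observation and the action chunk, so the two are generated jointly rather than by separate modules.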