

Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process

November 3, 2025
Authors: Jiayi Chen, Wenxuan Song, Pengxiang Ding, Ziyang Zhou, Han Zhao, Feilong Tang, Donglin Wang, Haoang Li
cs.AI

Abstract

Vision-language-action (VLA) models aim to understand natural language instructions and visual observations and to execute corresponding actions as an embodied agent. Recent work integrates future images into the understanding-acting loop, yielding unified VLAs that jointly understand, generate, and act -- reading text and images and producing future images and actions. However, these models either rely on external experts for modality unification or treat image generation and action prediction as separate processes, limiting the benefits of direct synergy between these tasks. Our core philosophy is to optimize generation and action jointly through a synchronous denoising process, where iterative refinement enables actions to evolve from initialization under constant and sufficient visual guidance. We ground this philosophy in our proposed Unified Diffusion VLA and Joint Discrete Denoising Diffusion Process (JD3P), a joint diffusion process that integrates multiple modalities into a single denoising trajectory and serves as the key mechanism making understanding, generation, and acting intrinsically synergistic. Our model and theory are built on a unified tokenized space of all modalities and a hybrid attention mechanism. We further propose a two-stage training pipeline and several inference-time techniques that optimize performance and efficiency. Our approach achieves state-of-the-art performance on benchmarks such as CALVIN, LIBERO, and SimplerEnv with 4× faster inference than autoregressive methods, and we demonstrate its effectiveness through in-depth analysis and real-world evaluations. Our project page is available at https://irpn-eai.github.io/UD-VLA.github.io/.
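The abstract's central idea, a single discrete denoising trajectory shared by future-image tokens and action tokens, can be illustrated with a toy sketch. This is a minimal, hypothetical mask-based discrete diffusion loop, not the paper's actual JD3P implementation: `toy_denoiser` stands in for the transformer, the vocabulary and token counts are made up, and the confidence-based unmasking schedule is one common choice for discrete diffusion decoders.

```python
import numpy as np

MASK = -1     # id of the [MASK] token (illustrative)
VOCAB = 16    # toy vocabulary size
STEPS = 4     # number of denoising iterations

rng = np.random.default_rng(0)

def toy_denoiser(tokens):
    """Stand-in for the multimodal transformer: per-position logits."""
    return rng.normal(size=(tokens.shape[0], VOCAB))

def joint_denoise_sketch(num_img_tokens=12, num_act_tokens=4, steps=STEPS):
    # Image and action tokens live in one sequence and are denoised together.
    n = num_img_tokens + num_act_tokens
    tokens = np.full(n, MASK)                  # start fully masked
    for step in range(steps):
        logits = toy_denoiser(tokens)
        conf = logits.max(axis=1)              # confidence per position
        pred = logits.argmax(axis=1)           # predicted token per position
        masked = np.flatnonzero(tokens == MASK)
        # Unmask the most confident fraction of still-masked positions,
        # so all positions are filled by the final step.
        k = max(1, int(np.ceil(len(masked) / (steps - step))))
        chosen = masked[np.argsort(-conf[masked])[:k]]
        tokens[chosen] = pred[chosen]
    return tokens[:num_img_tokens], tokens[num_img_tokens:]

img_tokens, act_tokens = joint_denoise_sketch()
```

Because both modalities sit in one trajectory, every refinement step lets the partially denoised future image condition the action tokens (and vice versa), which is the "constant and sufficient visual guidance" the abstract describes.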