Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process
November 3, 2025
Authors: Jiayi Chen, Wenxuan Song, Pengxiang Ding, Ziyang Zhou, Han Zhao, Feilong Tang, Donglin Wang, Haoang Li
cs.AI
Abstract
Vision-language-action (VLA) models aim to understand natural language
instructions and visual observations and to execute corresponding actions as an
embodied agent. Recent work integrates future images into the
understanding-acting loop, yielding unified VLAs that jointly understand,
generate, and act -- reading text and images and producing future images and
actions. However, these models either rely on external experts for modality
unification or treat image generation and action prediction as separate
processes, limiting the benefits of direct synergy between these tasks. Our
core philosophy is to optimize generation and action jointly through a
synchronous denoising process, in which iterative refinement enables actions
to evolve from initialization under constant and sufficient visual guidance.
We ground this philosophy in our proposed Unified Diffusion VLA and Joint
Discrete Denoising Diffusion Process (JD3P), which is a joint diffusion process
that integrates multiple modalities into a single denoising trajectory to serve
as the key mechanism enabling understanding, generation, and acting to be
intrinsically synergistic. Our model and theory are built on a unified
tokenized space of all modalities and a hybrid attention mechanism. We further
propose a two-stage training pipeline and several inference-time techniques
that optimize performance and efficiency. Our approach achieves
state-of-the-art performance on benchmarks such as CALVIN, LIBERO, and
SimplerEnv with 4× faster inference than autoregressive methods, and we
demonstrate its effectiveness through in-depth analysis and real-world
evaluations. Our project page is available at
https://irpn-eai.github.io/UD-VLA.github.io/.
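To make the joint denoising idea concrete, the sketch below shows a toy mask-based discrete diffusion loop in which future-image tokens and action tokens share one denoising trajectory: all positions start masked, and at each step the most confident predictions across both modalities are unmasked together. The denoiser, token counts, and unmasking schedule here are illustrative assumptions, not the paper's released implementation; a real model would condition on language and observation tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

MASK = -1
N_IMG, N_ACT, VOCAB = 16, 7, 32   # hypothetical: future-image tokens, action tokens, codebook size
STEPS = 4                          # number of joint denoising steps

def toy_denoiser(tokens):
    """Stand-in for the unified transformer: per-position logits over the vocabulary.
    The real model would attend over text, observation, image, and action tokens."""
    return rng.normal(size=(len(tokens), VOCAB))

def joint_discrete_denoise(n_steps=STEPS):
    # Image and action tokens live in one unified sequence, fully masked at t=0.
    tokens = np.full(N_IMG + N_ACT, MASK)
    for step in range(n_steps):
        logits = toy_denoiser(tokens)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)            # most likely token per position
        conf = probs.max(-1)               # its confidence
        masked = np.flatnonzero(tokens == MASK)
        # Unmask an even fraction of remaining positions each step,
        # drawn jointly from both modalities by confidence.
        k = int(np.ceil(len(masked) / (n_steps - step)))
        keep = masked[np.argsort(-conf[masked])[:k]]
        tokens[keep] = pred[keep]
    # Split the finished trajectory back into modalities.
    return tokens[:N_IMG], tokens[N_IMG:]

img_tok, act_tok = joint_discrete_denoise()
```

Because image and action positions compete for unmasking within the same trajectory, action tokens are refined while partially generated future-image tokens are already visible to the denoiser, which is the synergy the abstract attributes to JD3P.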