
Dual-Stream Diffusion for World-Model Augmented Vision-Language-Action Model

October 31, 2025
Authors: John Won, Kyungmin Lee, Huiwon Jang, Dongyoung Kim, Jinwoo Shin
cs.AI

Abstract
Recently, augmenting Vision-Language-Action models (VLAs) with world modeling has shown promise in improving robotic policy learning. However, it remains challenging to jointly predict next-state observations and action sequences because of the inherent difference between the two modalities. To address this, we propose DUal-STream diffusion (DUST), a world-model augmented VLA framework that handles the modality conflict and enhances the performance of VLAs across diverse tasks. Specifically, we propose a multimodal diffusion transformer architecture that explicitly maintains separate modality streams while still enabling cross-modal knowledge sharing. In addition, we introduce independent noise perturbations for each modality and a decoupled flow-matching loss. This design enables the model to learn the joint distribution in a bidirectional manner while avoiding the need for a unified latent space. Based on the decoupling of modalities during training, we also introduce a joint sampling method that supports test-time scaling, where action and vision tokens evolve asynchronously at different rates. Through experiments on simulated benchmarks such as RoboCasa and GR-1, DUST achieves up to 6% gains over baseline methods, while our test-time scaling approach provides an additional 2-5% boost. On real-world tasks with the Franka Research 3, DUST improves success rates by 13%, confirming its effectiveness beyond simulation. Furthermore, pre-training on action-free videos from BridgeV2 yields significant transfer gains on RoboCasa, underscoring DUST's potential for large-scale VLA pretraining.
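The decoupled training objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: each modality stream (action tokens and next-frame vision tokens) gets its own independently sampled timestep and noise, a per-stream flow-matching loss is computed under the rectified-flow convention (linear path `x_t = (1-t)·x0 + t·ε`, velocity target `ε − x0`), and the two losses are summed. The array shapes, the `dummy_model`, and the variable names are illustrative assumptions; the real predictor would be the shared multimodal diffusion transformer with separate streams.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_match_loss(x0, v_pred_fn, rng):
    """One decoupled flow-matching term for a single modality stream.

    Uses the rectified-flow linear path x_t = (1 - t) * x0 + t * eps,
    whose velocity target is eps - x0. Timestep t and noise eps are
    sampled independently per modality, so the two streams are
    perturbed at different noise levels.
    """
    t = rng.uniform()                      # modality-specific timestep
    eps = rng.standard_normal(x0.shape)    # modality-specific noise
    x_t = (1.0 - t) * x0 + t * eps         # noisy interpolant
    target = eps - x0                      # flow-matching velocity target
    v_pred = v_pred_fn(x_t, t)             # model's velocity prediction
    return float(np.mean((v_pred - target) ** 2))

# Toy stand-ins for the two streams (shapes are hypothetical):
action_tokens = rng.standard_normal((8, 16))   # action-sequence latents
vision_tokens = rng.standard_normal((64, 16))  # next-observation latents

# Placeholder "denoiser" that echoes its noisy input; a real model is
# the dual-stream transformer sharing knowledge across modalities.
dummy_model = lambda x_t, t: x_t

# Decoupled loss: each modality draws its own (t, eps); terms are summed,
# so neither stream is forced into a unified latent space.
loss = (flow_match_loss(action_tokens, dummy_model, rng)
        + flow_match_loss(vision_tokens, dummy_model, rng))
print(loss)
```

Because each stream carries its own timestep, the same decoupling lets sampling advance the action and vision tokens at different rates at test time, which is the asynchrony the joint sampling method exploits.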