BagelVLA: Enhancing Long-Horizon Manipulation via Interleaved Vision-Language-Action Generation
February 10, 2026
Authors: Yucheng Hu, Jianke Zhang, Yuanfei Luo, Yanjiang Guo, Xiaoyu Chen, Xinshu Sun, Kun Feng, Qingzhou Lu, Sheng Chen, Yangang Zhang, Wei Li, Jianyu Chen
cs.AI
Abstract
Equipping embodied agents with the ability to reason about tasks, foresee physical outcomes, and generate precise actions is essential for general-purpose manipulation. While recent Vision-Language-Action (VLA) models have leveraged pre-trained foundation models, they typically focus on either linguistic planning or visual forecasting in isolation. These methods rarely integrate both capabilities simultaneously to guide action generation, leading to suboptimal performance in complex, long-horizon manipulation tasks. To bridge this gap, we propose BagelVLA, a unified model that integrates linguistic planning, visual forecasting, and action generation within a single framework. Initialized from a pre-trained unified understanding-and-generation model, BagelVLA is trained to interleave textual reasoning and visual prediction directly into the action execution loop. To couple these modalities efficiently, we introduce Residual Flow Guidance (RFG), which initializes from the current observation and uses single-step denoising to extract predictive visual features, guiding action generation with minimal latency. Extensive experiments demonstrate that BagelVLA outperforms existing baselines by a significant margin on multiple simulated and real-world benchmarks, particularly on tasks requiring multi-stage reasoning.
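The abstract describes Residual Flow Guidance only at a high level: start from the current observation rather than pure noise, take a single denoising step to obtain a predictive visual feature, and condition action generation on it. The sketch below illustrates that idea under stated assumptions; all module names (the encoder, flow predictor, and action head) and the exact one-step update rule are hypothetical placeholders, not the paper's implementation.

```python
# A minimal, illustrative sketch of the single-step predictive guidance idea behind RFG.
# Assumptions: a latent visual encoder, a flow network predicting a velocity/residual
# toward a future latent, and an action head conditioned on current + predicted features.
import torch
import torch.nn as nn


class ResidualFlowGuidanceSketch(nn.Module):
    def __init__(self, latent_dim: int = 256, action_dim: int = 7):
        super().__init__()
        self.encoder = nn.Linear(3 * 224 * 224, latent_dim)       # stand-in visual encoder
        self.flow = nn.Linear(latent_dim, latent_dim)              # predicts a residual toward the future latent
        self.action_head = nn.Linear(2 * latent_dim, action_dim)   # consumes current + predicted features

    def forward(self, obs_image: torch.Tensor) -> torch.Tensor:
        # Encode the current observation into a latent state.
        z_t = self.encoder(obs_image.flatten(1))
        # Single-step denoising: one Euler-style flow update starting from the current
        # latent instead of iterating from noise, which keeps latency low.
        velocity = self.flow(z_t)
        z_pred = z_t + velocity  # predictive visual feature for the near future
        # Guide action generation with both the current and the predicted features.
        return self.action_head(torch.cat([z_t, z_pred], dim=-1))


# Usage: one forward pass maps a single RGB observation to an action vector.
model = ResidualFlowGuidanceSketch()
action = model(torch.randn(1, 3, 224, 224))
print(action.shape)  # torch.Size([1, 7])
```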