OxyGen: Unified KV Cache Management for Vision-Language-Action Models under Multi-Task Parallelism

March 15, 2026
Authors: Xiangyu Li, Huaizhi Tang, Xin Ding, Weijun Wang, Ting Cao, Yunxin Liu
cs.AI

Abstract

Embodied AI agents increasingly require parallel execution of multiple tasks, such as manipulation, conversation, and memory construction, from shared observations under distinct time constraints. Recent Mixture-of-Transformers (MoT) Vision-Language-Action Models (VLAs) architecturally support such heterogeneous outputs, yet existing inference systems fail to achieve efficient multi-task parallelism for on-device deployment due to redundant computation and resource contention. We identify isolated KV cache management as the root cause. To address this, we propose unified KV cache management, an inference paradigm that treats KV cache as a first-class shared resource across tasks and over time. This abstraction enables two key optimizations: cross-task KV sharing eliminates redundant prefill of shared observations, while cross-frame continuous batching decouples variable-length language decoding from fixed-rate action generation across control cycles. We implement this paradigm for π_{0.5}, the most popular MoT VLA, and evaluate under representative robotic configurations. OxyGen achieves up to a 3.7× speedup over isolated execution, delivering over 200 tokens/s language throughput and 70 Hz action frequency simultaneously without action quality degradation.
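To make the cross-task KV sharing idea concrete, here is a minimal toy sketch (all class and function names are hypothetical, not from the paper): under isolated management each task re-runs prefill over the same observation tokens, whereas a unified cache computes the prefill once and lets every task read the shared entries.

```python
class KVCache:
    """Toy KV cache that counts how many prefill passes it performs."""

    def __init__(self):
        self.entries = {}       # token position -> (key, value) pair
        self.prefill_calls = 0

    def prefill(self, observation_tokens):
        # Stand-in for the real K/V projections of a transformer layer.
        self.prefill_calls += 1
        for pos, tok in enumerate(observation_tokens):
            self.entries[pos] = (f"K({tok})", f"V({tok})")


def run_isolated(tasks, observation_tokens):
    """Each task owns a private cache, so prefill work is duplicated."""
    total_prefills = 0
    for _ in tasks:
        cache = KVCache()
        cache.prefill(observation_tokens)
        total_prefills += cache.prefill_calls
    return total_prefills


def run_unified(tasks, observation_tokens):
    """One shared cache: the observation is prefilled once for all tasks."""
    shared = KVCache()
    shared.prefill(observation_tokens)
    for _ in tasks:
        _ = shared.entries  # each task reads the same shared KV entries
    return shared.prefill_calls


obs = ["img_patch_0", "img_patch_1", "instruction"]
tasks = ["manipulation", "conversation", "memory"]
print(run_isolated(tasks, obs))  # 3 prefill passes over the same observation
print(run_unified(tasks, obs))   # 1 prefill pass shared by all three tasks
```

With three tasks over one observation, the isolated scheme performs three identical prefill passes while the unified scheme performs one; this duplicated prefill is the redundancy the paper's abstraction is designed to eliminate.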