

CogVLA: Cognition-Aligned Vision-Language-Action Model via Instruction-Driven Routing & Sparsification

August 28, 2025
Authors: Wei Li, Renshan Zhang, Rui Shao, Jie He, Liqiang Nie
cs.AI

Abstract

Recent Vision-Language-Action (VLA) models built on pre-trained Vision-Language Models (VLMs) require extensive post-training, resulting in high computational overhead that limits scalability and deployment. We propose CogVLA, a Cognition-Aligned Vision-Language-Action framework that leverages instruction-driven routing and sparsification to improve both efficiency and performance. CogVLA draws inspiration from human multimodal coordination and introduces a 3-stage progressive architecture. 1) Encoder-FiLM based Aggregation Routing (EFA-Routing) injects instruction information into the vision encoder to selectively aggregate and compress dual-stream visual tokens, forming an instruction-aware latent representation. 2) Building upon this compact visual encoding, LLM-FiLM based Pruning Routing (LFP-Routing) introduces action intent into the language model by pruning instruction-irrelevant visually grounded tokens, thereby achieving token-level sparsity. 3) To ensure that compressed perception inputs can still support accurate and coherent action generation, we introduce V-L-A Coupled Attention (CAtten), which combines causal vision-language attention with bidirectional action parallel decoding. Extensive experiments on the LIBERO benchmark and real-world robotic tasks demonstrate that CogVLA achieves state-of-the-art performance with success rates of 97.4% and 70.0%, respectively, while reducing training costs by 2.5-fold and decreasing inference latency by 2.8-fold compared to OpenVLA. CogVLA is open-sourced and publicly available at https://github.com/JiuTian-VL/CogVLA.
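To make the routing idea concrete, the snippet below is a minimal PyTorch sketch of instruction-conditioned token compression in the general spirit of EFA-Routing: an instruction embedding produces per-channel scale and shift parameters (FiLM) that modulate the visual tokens, and a scoring head keeps only the top-k instruction-relevant tokens. The module name, dimensions, pooled instruction embedding, and the top-k heuristic are illustrative assumptions, not the authors' implementation.

# Sketch only: FiLM-style instruction conditioning plus top-k token selection.
# Names, dimensions, and the scoring heuristic are assumptions for illustration.
import torch
import torch.nn as nn


class FiLMAggregationRouter(nn.Module):
    def __init__(self, vis_dim: int = 1024, txt_dim: int = 768, keep_tokens: int = 64):
        super().__init__()
        self.keep_tokens = keep_tokens
        # Instruction embedding -> per-channel scale (gamma) and shift (beta).
        self.film = nn.Linear(txt_dim, 2 * vis_dim)
        # Scores how relevant each modulated visual token is to the instruction.
        self.score = nn.Linear(vis_dim, 1)

    def forward(self, vis_tokens: torch.Tensor, instr_emb: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (B, N, vis_dim); instr_emb: (B, txt_dim), e.g. pooled text features.
        gamma, beta = self.film(instr_emb).chunk(2, dim=-1)      # (B, vis_dim) each
        modulated = gamma.unsqueeze(1) * vis_tokens + beta.unsqueeze(1)
        # Keep the highest-scoring tokens per sample (compressed, instruction-aware set).
        scores = self.score(modulated).squeeze(-1)               # (B, N)
        top_idx = scores.topk(self.keep_tokens, dim=-1).indices  # (B, keep_tokens)
        idx = top_idx.unsqueeze(-1).expand(-1, -1, modulated.size(-1))
        return modulated.gather(1, idx)                          # (B, keep_tokens, vis_dim)


if __name__ == "__main__":
    router = FiLMAggregationRouter()
    compressed = router(torch.randn(2, 576, 1024), torch.randn(2, 768))
    print(compressed.shape)  # torch.Size([2, 64, 1024])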
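Likewise, the V-L-A Coupled Attention described in stage 3 pairs a causal vision-language prefix with bidirectional, parallel decoding of the action chunk. The sketch below builds such a hybrid attention mask under the sole assumption that the sequence splits into a V-L prefix followed by an action block; the actual CAtten design may differ in detail.

# Sketch only: hybrid attention mask (causal V-L prefix, bidirectional action block).
import torch


def coupled_attention_mask(num_vl: int, num_act: int) -> torch.Tensor:
    """Boolean (L, L) mask; True means the query row may attend to the key column."""
    total = num_vl + num_act
    # Start from a standard causal (lower-triangular) mask over the whole sequence.
    mask = torch.tril(torch.ones(total, total)).bool()
    # Make the action block bidirectional so the whole action chunk can be decoded
    # in parallel; action tokens still see the full V-L prefix (from the tril),
    # and V-L tokens still cannot attend to later action tokens.
    mask[num_vl:, num_vl:] = True
    return mask


if __name__ == "__main__":
    print(coupled_attention_mask(num_vl=6, num_act=3).int())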