CogVLA: Cognition-Aligned Vision-Language-Action Model via Instruction-Driven Routing & Sparsification
August 28, 2025
Authors: Wei Li, Renshan Zhang, Rui Shao, Jie He, Liqiang Nie
cs.AI
Abstract
Recent Vision-Language-Action (VLA) models built on pre-trained
Vision-Language Models (VLMs) require extensive post-training, resulting in
high computational overhead that limits scalability and deployment. We propose
CogVLA, a Cognition-Aligned Vision-Language-Action framework that leverages
instruction-driven routing and sparsification to improve both efficiency and
performance. CogVLA draws inspiration from human multimodal coordination and
introduces a 3-stage progressive architecture. 1) Encoder-FiLM based
Aggregation Routing (EFA-Routing) injects instruction information into the
vision encoder to selectively aggregate and compress dual-stream visual tokens,
forming an instruction-aware latent representation. 2) Building upon this
compact visual encoding, LLM-FiLM based Pruning Routing (LFP-Routing)
introduces action intent into the language model by pruning
instruction-irrelevant visually grounded tokens, thereby achieving token-level
sparsity. 3) To ensure that compressed perception inputs can still support
accurate and coherent action generation, we introduce V-L-A Coupled Attention
(CAtten), which combines causal vision-language attention with bidirectional
action parallel decoding. Extensive experiments on the LIBERO benchmark and
real-world robotic tasks demonstrate that CogVLA achieves state-of-the-art
performance with success rates of 97.4% and 70.0%, respectively, while reducing
training costs by 2.5-fold and decreasing inference latency by 2.8-fold
compared to OpenVLA. CogVLA is open-sourced and publicly available at
https://github.com/JiuTian-VL/CogVLA.
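
To make the FiLM-based routing and pruning described in stages 1 and 2 more concrete, here is a minimal sketch, not the authors' implementation: it modulates visual tokens with a pooled instruction embedding via FiLM, scores each token's instruction relevance, and keeps only the top-k tokens. All module names, dimensions, and the top-k routing rule are assumptions for illustration.

```python
# Minimal sketch (not the authors' code) of FiLM-style instruction conditioning
# followed by instruction-driven token pruning, in the spirit of EFA-Routing /
# LFP-Routing. Names, dimensions, and the top-k rule are illustrative assumptions.
import torch
import torch.nn as nn


class FiLMRouter(nn.Module):
    """Modulates visual tokens with an instruction embedding (FiLM), scores them,
    and keeps only the top-k most instruction-relevant tokens."""

    def __init__(self, dim: int, keep_ratio: float = 0.25):
        super().__init__()
        self.to_gamma_beta = nn.Linear(dim, 2 * dim)  # instruction -> (scale, shift)
        self.scorer = nn.Linear(dim, 1)               # per-token relevance score
        self.keep_ratio = keep_ratio

    def forward(self, visual_tokens: torch.Tensor, instr_emb: torch.Tensor):
        # visual_tokens: (B, N, D); instr_emb: (B, D) pooled instruction embedding
        gamma, beta = self.to_gamma_beta(instr_emb).chunk(2, dim=-1)        # (B, D) each
        modulated = gamma.unsqueeze(1) * visual_tokens + beta.unsqueeze(1)  # FiLM

        scores = self.scorer(modulated).squeeze(-1)                         # (B, N)
        k = max(1, int(self.keep_ratio * visual_tokens.shape[1]))
        topk = scores.topk(k, dim=-1).indices                               # (B, k)
        idx = topk.unsqueeze(-1).expand(-1, -1, visual_tokens.shape[-1])
        return modulated.gather(1, idx)                                     # (B, k, D)


router = FiLMRouter(dim=768, keep_ratio=0.25)
pruned = router(torch.randn(2, 256, 768), torch.randn(2, 768))
print(pruned.shape)  # torch.Size([2, 64, 768])
```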
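
Stage 3 combines causal vision-language attention with bidirectional, parallel action decoding. The sketch below is an assumption about how such a coupled attention mask could be laid out, not the paper's implementation: prefix (vision-language) tokens attend causally, while action tokens attend to the full prefix and to each other.

```python
# Minimal sketch (an assumption, not the paper's implementation) of an attention
# mask in the spirit of V-L-A Coupled Attention (CAtten): causal attention within
# the vision-language prefix, bidirectional attention among action tokens.
import torch


def coupled_attention_mask(num_prefix: int, num_action: int) -> torch.Tensor:
    """Returns a boolean mask of shape (L, L) where True marks allowed attention."""
    total = num_prefix + num_action
    mask = torch.zeros(total, total, dtype=torch.bool)

    # Causal attention within the vision-language prefix.
    mask[:num_prefix, :num_prefix] = torch.tril(
        torch.ones(num_prefix, num_prefix, dtype=torch.bool)
    )
    # Action tokens see the whole prefix and each other (bidirectional),
    # which is what allows the action chunk to be decoded in parallel.
    mask[num_prefix:, :] = True
    return mask


print(coupled_attention_mask(num_prefix=4, num_action=3).int())
```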