BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation
June 9, 2025
Authors: Hongyu Wang, Chuyan Xiong, Ruiping Wang, Xilin Chen
cs.AI
Abstract
Vision-Language-Action (VLA) models have shown impressive capabilities across
a wide range of robotics manipulation tasks. However, their growing model size
poses significant challenges for deployment on resource-constrained robotic
systems. While 1-bit pretraining has proven effective for enhancing the
inference efficiency of large language models with minimal performance loss,
its application to VLA models remains underexplored. In this work, we present
BitVLA, the first 1-bit VLA model for robotics manipulation, in which every
parameter is ternary, i.e., {-1, 0, 1}. To further reduce the memory footprint
of the vision encoder, we propose a distillation-aware training strategy that
compresses the full-precision encoder to 1.58-bit weights. During this process,
the full-precision encoder serves as the teacher model to better align latent
representations. Despite the lack of large-scale robotics pretraining, BitVLA
achieves performance comparable to the state-of-the-art model OpenVLA-OFT with
4-bit post-training quantization on the LIBERO benchmark, while consuming only
29.8% of the memory. These results highlight BitVLA's promise for deployment on
memory-constrained edge devices. We release the code and model weights at
https://github.com/ustcwhy/BitVLA.
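
The abstract describes two techniques: ternary {-1, 0, 1} ("1.58-bit") weight quantization and distillation-aware training in which a full-precision encoder teaches the quantized one. The sketch below is an illustrative approximation of these ideas, not the authors' released implementation; the names `weight_quant`, `BitLinear`, and `distill_loss`, the absmean scaling rule, and the cosine alignment loss are assumptions chosen for clarity.

```python
# Minimal sketch: BitNet-style ternary weight quantization with a
# straight-through estimator, plus a simple latent-alignment distillation
# loss against a frozen full-precision teacher. Hypothetical names; not
# the BitVLA codebase.

import torch
import torch.nn as nn
import torch.nn.functional as F


def weight_quant(w: torch.Tensor) -> torch.Tensor:
    """Quantize weights to ternary values {-1, 0, 1} scaled by mean |w|."""
    scale = w.abs().mean().clamp(min=1e-5)
    q = (w / scale).round().clamp(-1, 1)
    return q * scale


class BitLinear(nn.Linear):
    """Linear layer whose weights are ternarized on the forward pass.

    The straight-through estimator w + (wq - w).detach() keeps gradients
    flowing to the full-precision latent weights during training.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        w_q = w + (weight_quant(w) - w).detach()
        return F.linear(x, w_q, self.bias)


def distill_loss(student_feats: torch.Tensor,
                 teacher_feats: torch.Tensor) -> torch.Tensor:
    """Align student and teacher latent representations (cosine distance)."""
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()


if __name__ == "__main__":
    # Toy usage: a ternary projection layer and a frozen full-precision teacher.
    student = BitLinear(512, 256)
    teacher = nn.Linear(512, 256).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)

    x = torch.randn(4, 512)
    loss = distill_loss(student(x), teacher(x))
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In this reading, the full-precision teacher provides the alignment target for the quantized student encoder, while the straight-through estimator allows standard gradient-based training despite the non-differentiable rounding step.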