Simple Recipe Works: Vision-Language-Action Models are Natural Continual Learners with Reinforcement Learning
March 12, 2026
Authors: Jiaheng Hu, Jay Shim, Chen Tang, Yoonchang Sung, Bo Liu, Peter Stone, Roberto Martin-Martin
cs.AI
Abstract
Continual Reinforcement Learning (CRL) for Vision-Language-Action (VLA) models is a promising direction toward self-improving embodied agents that can adapt in open-ended, evolving environments. However, conventional wisdom from continual learning suggests that naive Sequential Fine-Tuning (Seq. FT) leads to catastrophic forgetting, necessitating complex CRL strategies. In this work, we take a step back and conduct a systematic study of CRL for large pretrained VLAs across three models and five challenging lifelong RL benchmarks. We find that, contrary to established belief, simple Seq. FT with low-rank adaptation (LoRA) is remarkably strong: it achieves high plasticity, exhibits little to no forgetting, and retains strong zero-shot generalization, frequently outperforming more sophisticated CRL methods. Through detailed analysis, we show that this robustness arises from a synergy between the large pretrained model, parameter-efficient adaptation, and on-policy RL. Together, these components reshape the stability-plasticity trade-off, making continual adaptation both stable and scalable. Our results position Sequential Fine-Tuning as a powerful method for continual RL with VLAs and provide new insights into lifelong learning in the large model era. Code is available at github.com/UT-Austin-RobIn/continual-vla-rl.
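To make the recipe concrete, below is a minimal sketch of naive sequential fine-tuning with LoRA under on-policy RL, assuming a generic PyTorch VLA policy. The LoRA wiring uses the real Hugging Face `peft` API; `load_vla`, `make_env`, `collect_rollouts`, and `ppo_update` are hypothetical placeholders for the model loader, task environments, and RL machinery, which the abstract does not specify. For the paper's actual implementation, see the linked repository.

```python
# Sketch of the "simple recipe": sequentially fine-tune a pretrained VLA policy
# on a stream of tasks using LoRA adapters and on-policy RL, with no replay
# buffer, regularization penalty, or task-specific heads.
import torch
from peft import LoraConfig, get_peft_model  # real peft API

# Hypothetical loader for a pretrained VLA checkpoint (placeholder name).
model = load_vla("pretrained-vla-checkpoint")

# Parameter-efficient adaptation: train only low-rank adapters while the
# large pretrained backbone stays frozen. Ranks/targets are illustrative.
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"])
policy = get_peft_model(model, lora_cfg)

# Only adapter parameters require gradients after wrapping.
optimizer = torch.optim.AdamW(
    (p for p in policy.parameters() if p.requires_grad), lr=1e-4)

NUM_RL_STEPS = 10_000  # illustrative per-task budget

# Naive Seq. FT: visit each task once, in order, updating the same adapters.
for task in ["task_1", "task_2", "task_3"]:      # lifelong task stream
    env = make_env(task)                         # hypothetical env factory
    for _ in range(NUM_RL_STEPS):
        rollouts = collect_rollouts(policy, env)  # on-policy data only
        loss = ppo_update(policy, rollouts)       # e.g., a PPO-style loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```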