
Unlocking Continual Learning Abilities in Language Models

June 25, 2024
作者: Wenyu Du, Shuang Cheng, Tongxu Luo, Zihan Qiu, Zeyu Huang, Ka Chun Cheung, Reynold Cheng, Jie Fu
cs.AI

Abstract

Language models (LMs) exhibit impressive performance and generalization capabilities. However, LMs struggle with the persistent challenge of catastrophic forgetting, which undermines their long-term sustainability in continual learning (CL). Existing approaches usually address the issue by incorporating old task data or task-wise inductive bias into LMs. However, old data and accurate task information are often unavailable or costly to collect, limiting the applicability of current CL approaches to LMs. To address this limitation, we introduce MIGU (MagnItude-based Gradient Updating for continual learning), a rehearsal-free and task-label-free method that updates only the model parameters associated with large output magnitudes in LMs' linear layers. MIGU is based on our observation that the L1-normalized magnitude distribution of the output in LMs' linear layers differs when the LMs process data from different tasks. By imposing this simple constraint on the gradient update process, we can leverage the inherent behaviors of LMs, thereby unlocking their innate CL abilities. Our experiments demonstrate that MIGU is universally applicable to all three LM architectures (T5, RoBERTa, and Llama2), delivering state-of-the-art or on-par performance across continual finetuning and continual pre-training settings on four CL benchmarks. For example, MIGU brings a 15.2% average accuracy improvement over conventional parameter-efficient finetuning baselines in a 15-task CL benchmark. MIGU can also seamlessly integrate with all three existing CL types to further enhance performance. Code is available at https://github.com/wenyudu/MIGU.
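The core mechanism described above can be sketched in a few lines: compute the L1-normalized magnitude of a linear layer's output, keep the highest-magnitude units, and zero the weight gradient for the rest before the optimizer step. The sketch below is a minimal illustration with numpy, not the authors' implementation; the function name `migu_gradient_mask` and the `keep_ratio` parameter are hypothetical, and details such as how the threshold is chosen per layer follow the paper's repository, not this snippet.

```python
import numpy as np

def migu_gradient_mask(layer_output, grad_W, keep_ratio=0.5):
    """Mask a linear layer's weight gradient by output magnitude.

    layer_output: (batch, out_features) activations of the linear layer
    grad_W:       (out_features, in_features) gradient of the layer's weight
    keep_ratio:   fraction of output units whose gradients are kept (assumed knob)
    """
    # Per-unit magnitude, averaged over the batch, then L1-normalized
    mag = np.abs(layer_output).mean(axis=0)
    mag = mag / (mag.sum() + 1e-12)

    # Keep only the top-k output units by normalized magnitude
    k = max(1, int(keep_ratio * mag.size))
    keep = np.argsort(mag)[-k:]
    mask = np.zeros(mag.size, dtype=bool)
    mask[keep] = True

    # Zero the gradient rows of the units below the magnitude cutoff,
    # so only large-magnitude parameters are updated
    masked_grad = grad_W * mask[:, None]
    return masked_grad, mask
```

In a training loop this would sit between the backward pass and the optimizer step, e.g. applying the mask to each linear layer's gradient; because it needs neither stored old-task data nor task labels, it matches the rehearsal-free, task-label-free setting the abstract describes.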
