

Learning, Fast and Slow: Towards LLMs That Adapt Continually

May 12, 2026
作者: Rishabh Tiwari, Kusha Sareen, Lakshya A Agrawal, Joseph E. Gonzalez, Matei Zaharia, Kurt Keutzer, Inderjit S Dhillon, Rishabh Agarwal, Devvrit Khatri
cs.AI

Abstract

Large language models (LLMs) are trained for downstream tasks by updating their parameters (e.g., via RL). However, updating parameters forces them to absorb task-specific information, which can result in catastrophic forgetting and loss of plasticity. In contrast, in-context learning with fixed LLM parameters can cheaply and rapidly adapt to task-specific requirements (e.g., via prompt optimization), but typically cannot by itself match the performance gains available through updating LLM parameters. There is no good reason to restrict learning to be either in-context or in-weights. Moreover, humans likely also learn at different time scales (e.g., System 1 vs. System 2). To this end, we introduce a fast-slow learning framework for LLMs, with model parameters as "slow" weights and an optimized context as "fast" weights. These fast "weights" can learn from textual feedback to absorb task-specific information, allowing the slow weights to stay closer to the base model and preserve general reasoning behaviors. Fast-Slow Training (FST) is up to 3x more sample-efficient than slow-only learning (RL) across reasoning tasks, while consistently reaching a higher performance asymptote. Moreover, FST-trained models remain closer to the base LLM (up to 70% less KL divergence), resulting in less catastrophic forgetting than RL training. This reduced drift also preserves plasticity: after training on one task, FST-trained models adapt more effectively to a subsequent task than parameter-only trained models. In continual learning scenarios, where task domains change on the fly, FST continues to acquire each new task while parameter-only RL stalls.
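The fast-slow loop the abstract describes can be sketched as a toy Python loop. Everything below (`ToyLLM`, `fast_slow_training`, the feedback format, the update schedule) is a hypothetical illustration of the idea — frequent cheap updates to an in-context "fast" state, infrequent small updates to the "slow" parameters — and not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ToyLLM:
    params: float = 0.0  # stand-in for the "slow" weights

    def generate(self, prompt: str) -> str:
        # A real model would condition on the optimized context plus the
        # task prompt; this stub just echoes the prompt length.
        return f"answer({len(prompt)})"

    def rl_update(self, reward: float, lr: float = 0.1) -> None:
        # Stand-in for an infrequent RL gradient step on the parameters.
        self.params += lr * reward

def fast_slow_training(model: ToyLLM, prompts, rewards, slow_every: int = 4):
    context = ""  # the "fast" weights: an optimized in-context prompt
    for step, (prompt, reward) in enumerate(zip(prompts, rewards)):
        model.generate(context + prompt)
        # Fast update (every step): fold feedback into the context, so
        # task-specific information lives in-context, not in-weights.
        context += f"\n[step {step}: reward={reward}]"
        # Slow update (infrequent): small parameter change keeps the
        # model close to the base and preserves plasticity.
        if step % slow_every == 0:
            model.rl_update(reward)
    return model, context
```

The design point the sketch captures is the split in update frequency: the context absorbs task specifics at every step, while the parameters move rarely and by small amounts, which is what keeps the model near the base LLM.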