Reinforcement Learning Finetunes Small Subnetworks in Large Language Models
May 16, 2025
作者: Sagnik Mukherjee, Lifan Yuan, Dilek Hakkani-Tur, Hao Peng
cs.AI
Abstract
Reinforcement learning (RL) yields substantial improvements in the downstream
task performance of large language models (LLMs) and in their alignment with human values.
Surprisingly, such large gains result from updating only a small subnetwork
comprising just 5 percent to 30 percent of the parameters, with the rest
effectively unchanged. We refer to this phenomenon as parameter update sparsity
induced by RL. It is observed across all 7 widely used RL algorithms (e.g.,
PPO, GRPO, DPO) and all 10 LLMs from different families in our experiments.
This sparsity is intrinsic and occurs without any explicit sparsity-promoting
regularization or architectural constraints. Finetuning the subnetwork alone
recovers the test accuracy, and, remarkably, produces a model nearly identical
to the one obtained via full finetuning. The subnetworks from different random
seeds, training data, and even RL algorithms show substantially greater overlap
than expected by chance. Our analysis suggests that this sparsity is not due to
updating only a subset of layers; instead, nearly all parameter matrices
receive similarly sparse updates. Moreover, the updates to almost all parameter
matrices are nearly full-rank, suggesting RL updates a small subset of
parameters that nevertheless span almost the full subspaces that the parameter
matrices can represent. We conjecture that this update sparsity can be
primarily attributed to training on data that is near the policy distribution;
techniques that encourage the policy to remain close to the pretrained model,
such as KL regularization and gradient clipping, have limited impact.
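To make the notion of parameter update sparsity concrete, the following minimal sketch (not the paper's released code; the checkpoint names and the tolerance `atol` are illustrative assumptions) compares a pretrained checkpoint against its RL-finetuned counterpart and reports the fraction of entries that changed, both overall and per parameter matrix.

```python
# Minimal sketch: measure RL-induced update sparsity by diffing a pretrained
# checkpoint against its RL-finetuned version. Checkpoint names and `atol`
# below are illustrative assumptions, not values from the paper.
import torch
from transformers import AutoModelForCausalLM

def update_sparsity(pretrained_id: str, finetuned_id: str, atol: float = 0.0):
    """Return the overall fraction of parameters whose value changed after
    finetuning, plus the same fraction for each parameter matrix."""
    base = AutoModelForCausalLM.from_pretrained(pretrained_id, torch_dtype=torch.bfloat16)
    tuned = AutoModelForCausalLM.from_pretrained(finetuned_id, torch_dtype=torch.bfloat16)

    changed, total, per_matrix = 0, 0, {}
    tuned_params = dict(tuned.named_parameters())
    for name, p0 in base.named_parameters():
        p1 = tuned_params[name]
        # A parameter counts as "updated" if it moved by more than `atol`.
        diff_mask = (p1.detach() - p0.detach()).abs() > atol
        per_matrix[name] = diff_mask.float().mean().item()
        changed += diff_mask.sum().item()
        total += diff_mask.numel()
    return changed / total, per_matrix

# Example usage (hypothetical checkpoint names):
# overall, per_matrix = update_sparsity("base-llm", "base-llm-rl")
# print(f"fraction of parameters updated: {overall:.2%}")
```

An overall fraction in the 5 to 30 percent range, with similar per-matrix fractions, would correspond to the sparsity pattern described above.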
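The companion claim, that these sparse updates are nevertheless nearly full-rank, can be probed with a similar sketch. It is again an assumption-laden illustration rather than the authors' procedure; `base` and `tuned` are assumed to be the two models loaded in the previous sketch.

```python
# Minimal sketch: for each 2-D parameter matrix, compute the rank of the
# finetuning update and compare it to the maximum possible rank.
import torch

def update_rank_report(base, tuned):
    """Map each 2-D parameter matrix name to (rank of update, max possible rank)."""
    tuned_params = dict(tuned.named_parameters())
    report = {}
    for name, p0 in base.named_parameters():
        if p0.dim() != 2:  # only 2-D parameter matrices have a meaningful rank
            continue
        delta = (tuned_params[name].detach() - p0.detach()).float()
        rank = torch.linalg.matrix_rank(delta).item()
        report[name] = (rank, min(delta.shape))
    return report

# Example usage (reusing `base` and `tuned` from the previous sketch):
# for name, (r, r_max) in update_rank_report(base, tuned).items():
#     print(f"{name}: rank {r}/{r_max}")
```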