Reinforcement Learning Finetunes Small Subnetworks in Large Language Models
May 16, 2025
Authors: Sagnik Mukherjee, Lifan Yuan, Dilek Hakkani-Tur, Hao Peng
cs.AI
Abstract
Reinforcement learning (RL) yields substantial improvements in the downstream
task performance of large language models (LLMs) and their alignment with human
values.
Surprisingly, such large gains result from updating only a small subnetwork
comprising just 5 percent to 30 percent of the parameters, with the rest
effectively unchanged. We refer to this phenomenon as parameter update sparsity
induced by RL. It is observed across all 7 widely used RL algorithms (e.g.,
PPO, GRPO, DPO) and all 10 LLMs from different families in our experiments.
This sparsity is intrinsic and occurs without any explicit sparsity-promoting
regularization or architectural constraints. Finetuning the subnetwork alone
recovers the test accuracy, and, remarkably, produces a model nearly identical
to the one obtained via full finetuning. The subnetworks from different random
seeds, training data, and even RL algorithms show substantially greater overlap
than expected by chance. Our analysis suggests that this sparsity is not due to
updating only a subset of layers; instead, nearly all parameter matrices
receive similarly sparse updates. Moreover, the updates to almost all parameter
matrices are nearly full-rank, suggesting RL updates a small subset of
parameters that nevertheless span almost the full subspaces that the parameter
matrices can represent. We conjecture that this update sparsity can be
primarily attributed to training on data that is near the policy distribution,
while techniques that encourage the policy to remain close to the pretrained
model, such as KL regularization and gradient clipping, have limited impact.
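
As a concrete reading of "parameter update sparsity", the fraction of changed
entries and the rank of each weight-matrix update can be measured by diffing
checkpoints before and after RL. The sketch below is a reader's illustration
under assumed model identifiers (BASE_ID, TUNED_ID) and an assumed tolerance
TOL; it is not the authors' released code.

```python
# Minimal sketch, not the authors' released code: measure RL-induced parameter
# update sparsity and the rank of each update matrix by diffing a pretrained
# checkpoint against its RL-finetuned counterpart. Model identifiers and the
# tolerance TOL are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM

BASE_ID = "pretrained-policy"      # placeholder: the model before RL
TUNED_ID = "rl-finetuned-policy"   # placeholder: the same model after RL

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.float32)
tuned = AutoModelForCausalLM.from_pretrained(TUNED_ID, torch_dtype=torch.float32)

TOL = 0.0  # count an entry as "updated" if it changed at all; a small nonzero
           # tolerance absorbs numerical noise from mixed-precision training

total_entries, updated_entries = 0, 0
for (name, p_base), (_, p_tuned) in zip(base.named_parameters(),
                                         tuned.named_parameters()):
    delta = p_tuned.detach() - p_base.detach()
    changed = delta.abs() > TOL
    total_entries += changed.numel()
    updated_entries += changed.sum().item()

    # For 2-D weight matrices, also report the rank of the update
    # (the abstract notes these updates are nearly full-rank).
    if delta.dim() == 2:
        rank = torch.linalg.matrix_rank(delta.float()).item()  # slow for large matrices
        print(f"{name}: {changed.float().mean().item():.1%} of entries updated, "
              f"update rank {rank}/{min(delta.shape)}")

print(f"overall: {updated_entries / total_entries:.1%} of parameters updated")
```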
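
The observation that finetuning the subnetwork alone reproduces the fully
finetuned model suggests a natural experimental setup: freeze every entry
outside the subnetwork by masking its gradient. The following is a hypothetical
sketch of that setup; the mask construction and hook-based freezing are
assumptions for illustration, not the authors' described procedure.

```python
# Minimal sketch, assuming the subnetwork is given as boolean masks (e.g. the
# `changed` tensors from the previous sketch); this illustrates subnetwork-only
# finetuning via gradient masking, not the paper's exact recipe.
import torch

def build_masks(base: torch.nn.Module, tuned: torch.nn.Module, tol: float = 0.0):
    """Boolean mask per parameter marking entries that moved during a prior RL run."""
    return {
        name: (p_tuned.detach() - p_base.detach()).abs() > tol
        for (name, p_base), (_, p_tuned) in zip(base.named_parameters(),
                                                tuned.named_parameters())
    }

def restrict_to_subnetwork(model: torch.nn.Module, masks: dict) -> None:
    """Zero the gradient of every entry outside the subnetwork before it is applied."""
    for name, param in model.named_parameters():
        mask = masks[name].to(device=param.device, dtype=param.dtype)
        # register_hook on a leaf tensor replaces its incoming gradient.
        param.register_hook(lambda grad, m=mask: grad * m)

# Pair this with weight_decay=0 (or per-parameter optimizer groups) so decoupled
# weight decay does not move the entries that the mask is meant to freeze.
```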