

The Unreasonable Ineffectiveness of the Deeper Layers

March 26, 2024
Authors: Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts
cs.AI

Abstract

We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed. To prune these models, we identify the optimal block of layers to prune by considering similarity across layers; then, to "heal" the damage, we perform a small amount of finetuning. In particular, we use parameter-efficient finetuning (PEFT) methods, specifically quantization and Low Rank Adapters (QLoRA), such that each of our experiments can be performed on a single A100 GPU. From a practical perspective, these results suggest that layer pruning methods can complement other PEFT strategies to further reduce computational resources of finetuning on the one hand, and can improve the memory and latency of inference on the other hand. From a scientific perspective, the robustness of these LLMs to the deletion of layers implies either that current pretraining methods are not properly leveraging the parameters in the deeper layers of the network or that the shallow layers play a critical role in storing knowledge.
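For a concrete picture of the pruning step, the sketch below illustrates the kind of similarity-based block selection the abstract describes. It is a minimal illustration under stated assumptions, not the authors' reference code: it assumes a Hugging Face transformers causal LM that returns per-layer hidden states, and the names `angular_distance` and `find_prune_block` are ours.

```python
import torch

def angular_distance(x, y, eps=1e-8):
    # Angular distance between two hidden-state tensors, computed per token
    # and averaged over batch and sequence positions; lies in [0, 1].
    cos = torch.nn.functional.cosine_similarity(x, y, dim=-1)
    return torch.arccos(cos.clamp(-1 + eps, 1 - eps)).mean() / torch.pi

@torch.no_grad()
def find_prune_block(model, input_ids, n_prune):
    # Score every block of `n_prune` consecutive layers by how little it
    # changes the residual stream: compare the hidden states entering layer l
    # with those entering layer l + n_prune, and return the start index of
    # the block with the smallest angular distance.
    hidden = model(input_ids, output_hidden_states=True).hidden_states
    num_layers = len(hidden) - 1  # hidden[0] is the embedding output
    scores = [
        angular_distance(hidden[l], hidden[l + n_prune]).item()
        for l in range(num_layers - n_prune + 1)
    ]
    return min(range(len(scores)), key=scores.__getitem__)
```

In a Llama-style model the selected block could then be dropped with something like `del model.model.layers[l_star:l_star + n_prune]` (updating `config.num_hidden_layers` accordingly), after which a short QLoRA finetune serves as the healing step; these module paths are assumptions about the particular model family, not part of the paper.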