The LLM Surgeon
December 28, 2023
Authors: Tycho F. A. van der Ouderaa, Markus Nagel, Mart van Baalen, Yuki M. Asano, Tijmen Blankevoort
cs.AI
Abstract
State-of-the-art language models are becoming increasingly large in an effort
to achieve the highest performance on large corpora of available textual data.
However, the sheer size of the Transformer architectures makes it difficult to
deploy models within computational, environmental or device-specific
constraints. We explore data-driven compression of existing pretrained models
as an alternative to training smaller models from scratch. To do so, we scale
Kronecker-factored curvature approximations of the target loss landscape to
large language models. In doing so, we can compute both the dynamic allocation
of structures that can be removed as well as updates of remaining weights that
account for the removal. We provide a general framework for unstructured,
semi-structured and structured pruning and improve upon weight updates to
capture more correlations between weights, while remaining computationally
efficient. Experimentally, our method can prune rows and columns from a range
of OPT models and Llamav2-7B by 20%-30%, with a negligible loss in performance,
and achieve state-of-the-art results in unstructured and semi-structured
pruning of large language models.
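
To make the curvature-based pruning idea concrete, below is a minimal sketch (not the authors' implementation) of Optimal-Brain-Surgeon-style removal of a single weight under a Kronecker-factored curvature approximation F ≈ G ⊗ A for one linear layer, where A is the input-activation covariance and G a gradient covariance. The layer sizes, the placeholder gradients dY, and the damping constant are illustrative assumptions, not values from the paper.

```python
# Sketch: prune one weight of a linear layer using the OBS cost and update
# under a Kronecker-factored curvature approximation F ≈ G ⊗ A.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 8, 4, 256

X = rng.normal(size=(n, d_in))        # layer inputs collected from calibration data
W = rng.normal(size=(d_out, d_in))    # layer weight matrix
dY = rng.normal(size=(n, d_out))      # per-sample output gradients (placeholder)

# Kronecker factors of the curvature and their damped inverses.
A = X.T @ X / n + 1e-3 * np.eye(d_in)
G = dY.T @ dY / n + 1e-3 * np.eye(d_out)
A_inv, G_inv = np.linalg.inv(A), np.linalg.inv(G)

# OBS removal cost of weight (i, j): w_ij^2 / (2 [F^-1]_qq),
# with [F^-1]_qq = G_inv[i, i] * A_inv[j, j] under F ≈ G ⊗ A.
diag_Finv = np.outer(np.diag(G_inv), np.diag(A_inv))
costs = W**2 / (2 * diag_Finv)

# Remove the cheapest weight and update the remaining weights to compensate:
# delta W = -(w_ij / [F^-1]_qq) * outer(G_inv[:, i], A_inv[j, :]).
i, j = np.unravel_index(np.argmin(costs), costs.shape)
W += -(W[i, j] / diag_Finv[i, j]) * np.outer(G_inv[:, i], A_inv[j, :])
W[i, j] = 0.0                         # enforce an exact zero after the update
print(f"pruned weight ({i}, {j}); remaining weights updated")
```

The paper's method goes well beyond this single-weight step: it handles structured (row/column) and semi-structured removal, allocates sparsity dynamically across layers, and captures more correlations in the weight updates; the sketch only illustrates the underlying cost and compensation formulas.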