Exploring Federated Pruning for Large Language Models
May 19, 2025
Authors: Pengxin Guo, Yinong Wang, Wei Li, Mengting Liu, Ming Li, Jinkai Zheng, Liangqiong Qu
cs.AI
Abstract
LLM pruning has emerged as a promising technology for compressing LLMs,
enabling their deployment on resource-limited devices. However, current
methodologies typically require access to public calibration samples, which can
be challenging to obtain in privacy-sensitive domains. To address this issue,
we introduce FedPrLLM, a comprehensive federated pruning framework designed for
the privacy-preserving compression of LLMs. In FedPrLLM, each client only needs
to calculate a pruning mask matrix based on its local calibration data and
share it with the server to prune the global model. This approach allows for
collaborative pruning of the global model with the knowledge of each client
while maintaining local data privacy. Additionally, we conduct extensive
experiments to explore various possibilities within the FedPrLLM framework,
including different comparison groups, pruning strategies, and the decision to
scale weights. Our evaluation reveals that one-shot pruning with
layer comparison and no weight scaling is the optimal choice within the
FedPrLLM framework. We hope our work will help guide future efforts in pruning
LLMs in privacy-sensitive fields. Our code is available at
https://github.com/Pengxin-Guo/FedPrLLM.
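The abstract describes the protocol only at a high level: each client computes a pruning mask matrix from its private calibration data and shares just that mask with the server, which then prunes the global model in one shot with layer-wise comparison and no weight scaling. Below is a minimal, illustrative PyTorch sketch of that client/server split for a single linear layer. The Wanda-style scoring rule (|W| times activation norm) and the majority-vote aggregation of masks are assumptions for illustration, not details taken from the paper.

```python
import torch

def client_pruning_mask(weight, calib_acts, sparsity=0.5):
    """Client side: score a layer's weights with local calibration data and
    return a binary keep-mask. Only the mask leaves the client.
    The |W| * activation-norm score (Wanda-style) is an illustrative choice."""
    act_norm = calib_acts.norm(p=2, dim=0)                 # [in_features]
    scores = weight.abs() * act_norm.unsqueeze(0)          # [out_features, in_features]
    # "Layer comparison": rank all weights of the layer jointly.
    n = weight.numel()
    k = int(n * (1 - sparsity))                            # number of weights to keep
    threshold = scores.flatten().kthvalue(n - k).values
    return (scores > threshold).float()

def server_prune_layer(weight, client_masks, sparsity=0.5):
    """Server side: combine the clients' mask matrices and prune the global
    layer. Majority voting over masks is an assumed aggregation rule."""
    votes = torch.stack(client_masks).sum(dim=0)           # per-weight keep votes
    k = int(weight.numel() * (1 - sparsity))
    keep_idx = votes.flatten().topk(k).indices             # most-voted weights survive
    mask = torch.zeros(weight.numel(), device=weight.device)
    mask[keep_idx] = 1.0
    # One-shot pruning, no weight scaling: zero out pruned entries directly.
    return weight * mask.view_as(weight)
```

In the full framework this step would be repeated for each linear layer of the LLM, with clients transmitting only the binary masks, never their calibration data.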