Exploring Model Kinship for Merging Large Language Models
October 16, 2024
Authors: Yedi Hu, Yunzhi Yao, Ningyu Zhang, Shumin Deng, Huajun Chen
cs.AI
Abstract
Model merging has become one of the key techniques for enhancing the capabilities and efficiency of Large Language Models (LLMs). However, our understanding of the expected performance gains and the principles governing the merging of any two models remains limited. In this work, we introduce model kinship, the degree of similarity or relatedness between LLMs, analogous to kinship in biological evolution. Through comprehensive empirical analysis, we find a relationship between model kinship and the performance gains after model merging, which can help guide the selection of candidate models. Inspired by this, we propose a new model merging strategy, Top-k Greedy Merging with Model Kinship, which yields better performance on benchmark datasets. Specifically, we discover that using model kinship as a criterion helps to perform model merging continuously, alleviating the degradation (local optima) that arises during model evolution, and that model kinship can serve as a guide to escape these traps. Code is available at https://github.com/zjunlp/ModelKinship.
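For intuition, below is a minimal sketch of how such a kinship metric could be computed, assuming kinship is measured as the cosine similarity between two models' delta parameters (their weight changes relative to a shared base model). The function names and the choice of cosine similarity are illustrative assumptions, not the authors' exact implementation; see the repository for the official code.

```python
import torch

def delta_parameters(model_state, base_state):
    """Flatten a fine-tuned model's per-tensor weight changes,
    relative to its shared base model, into a single vector."""
    deltas = [
        (model_state[name] - base_state[name]).flatten().float()
        for name in base_state
    ]
    return torch.cat(deltas)

def model_kinship(state_a, state_b, base_state):
    """Illustrative kinship score: cosine similarity between the two
    models' delta-parameter vectors (higher = more closely related)."""
    delta_a = delta_parameters(state_a, base_state)
    delta_b = delta_parameters(state_b, base_state)
    return torch.nn.functional.cosine_similarity(delta_a, delta_b, dim=0).item()

# Usage: all three models must share the same architecture and base.
# score = model_kinship(model_a.state_dict(), model_b.state_dict(),
#                       base_model.state_dict())
```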
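And a hedged sketch of the Top-k Greedy Merging loop described above, assuming access to an `evaluate` benchmark function, a pairwise `merge` routine (e.g., weight averaging), and the `model_kinship` score from the previous sketch. The stopping rule and the exact way kinship filters candidates are simplified here; the idea is to skip near-saturated pairs (very high kinship) that tend to stall in local optima.

```python
def top_k_greedy_merge(models, evaluate, merge, kinship, k=3,
                       kinship_threshold=0.9, rounds=5):
    """Illustrative greedy loop: repeatedly merge the current best model
    with the top-k candidates, using kinship to filter merge pairs."""
    pool = sorted(models, key=evaluate, reverse=True)
    for _ in range(rounds):
        best, candidates = pool[0], pool[1:k + 1]
        merged = [
            merge(best, cand)
            for cand in candidates
            if kinship(best, cand) < kinship_threshold  # skip saturated pairs
        ]
        if not merged:
            break  # no admissible candidates left
        pool = sorted(pool + merged, key=evaluate, reverse=True)
    return pool[0]
```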