Activation-Informed Merging of Large Language Models
February 4, 2025
Authors: Amin Heyrani Nobari, Kaveh Alimohammadi, Ali ArjomandBigdeli, Akash Srivastava, Faez Ahmed, Navid Azizan
cs.AI
Abstract
Model merging, a method that combines the parameters and embeddings of multiple fine-tuned large language models (LLMs), offers a promising approach to enhancing model performance across various tasks while maintaining computational efficiency. This paper introduces Activation-Informed Merging (AIM), a technique that integrates information from the activation space of LLMs into the merging process to improve performance and robustness. AIM is designed as a flexible, complementary solution that is applicable to any existing merging method. It aims to preserve critical weights from the base model, drawing on principles from continual learning (CL) and model compression. Utilizing a task-agnostic calibration set, AIM selectively prioritizes essential weights during merging. We empirically demonstrate that AIM significantly enhances the performance of merged models across multiple benchmarks. Our findings suggest that considering activation-space information can yield substantial advancements in model merging strategies for LLMs, with up to a 40% increase in benchmark performance.