
Towards Modular LLMs by Building and Reusing a Library of LoRAs

May 18, 2024
Authors: Oleksiy Ostapenko, Zhan Su, Edoardo Maria Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Caccia, Alessandro Sordoni
cs.AI

Abstract

The growing number of parameter-efficient adaptations of a base large language model (LLM) calls for studying whether we can reuse such trained adapters to improve performance for new tasks. We study how to best build a library of adapters given multi-task data and devise techniques for both zero-shot and supervised task generalization through routing in such a library. We benchmark existing approaches to build this library and introduce model-based clustering, MBC, a method that groups tasks based on the similarity of their adapter parameters, indirectly optimizing for transfer across the multi-task dataset. To reuse the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters for new inputs without the need for retraining. We experiment with several LLMs, such as Phi-2 and Mistral, on a wide array of held-out tasks, verifying that MBC-based adapters and Arrow routing lead to superior generalization to new tasks. We make steps towards creating modular, adaptable LLMs that can match or outperform traditional joint training.
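To make the two ideas in the abstract concrete, the sketch below is a minimal, illustrative toy (not the authors' code): it clusters per-task LoRA adapters by the similarity of their flattened parameters, in the spirit of MBC, and then routes a new input representation to the most relevant adapters without retraining, in the spirit of Arrow. The use of each adapter's top singular direction as a routing prototype is an assumption about how such zero-shot routing could be scored; the abstract only says that the most relevant adapters are selected dynamically. All sizes and names (num_tasks, rank, hidden_dim, num_clusters, route) are hypothetical.

```python
# Minimal sketch, assuming LoRA updates of the form delta_W = B @ A.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
num_tasks, rank, hidden_dim, num_clusters = 16, 8, 64, 4

# One LoRA adapter per task: A has shape (rank, hidden_dim), B has shape (hidden_dim, rank).
adapters = [
    (rng.normal(size=(rank, hidden_dim)), rng.normal(size=(hidden_dim, rank)))
    for _ in range(num_tasks)
]

# --- Library construction (MBC-style): group tasks by adapter-parameter similarity ---
flat = np.stack([np.concatenate([A.ravel(), B.ravel()]) for A, B in adapters])
flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)  # compare directions, not scales
labels = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(flat)

# Placeholder library: average the adapters within each cluster
# (the paper instead trains one adapter per cluster on the pooled cluster data).
library = []
for c in range(num_clusters):
    members = [i for i, l in enumerate(labels) if l == c]
    A_c = np.mean([adapters[i][0] for i in members], axis=0)
    B_c = np.mean([adapters[i][1] for i in members], axis=0)
    library.append((A_c, B_c))

# --- Zero-shot routing (Arrow-inspired, assumed scoring): prototype = top right
# singular vector of each adapter's update delta_W = B @ A ---
prototypes = []
for A_c, B_c in library:
    _, _, Vt = np.linalg.svd(B_c @ A_c, full_matrices=False)
    prototypes.append(Vt[0])

def route(hidden_state, top_k=2):
    """Return the indices and softmax weights of the top_k adapters for one input."""
    scores = np.array([abs(hidden_state @ p) for p in prototypes])
    top = np.argsort(scores)[-top_k:]
    w = np.exp(scores[top] - scores[top].max())
    return top, w / w.sum()

x = rng.normal(size=hidden_dim)  # stand-in for a token's hidden representation
print(route(x))
```

In this toy, routing needs only the adapters themselves (their singular directions), which is what makes the selection zero-shot: no router is trained on the new task's data.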

