

Towards Modular LLMs by Building and Reusing a Library of LoRAs

May 18, 2024
作者: Oleksiy Ostapenko, Zhan Su, Edoardo Maria Ponti, Laurent Charlin, Nicolas Le Roux, Matheus Pereira, Lucas Caccia, Alessandro Sordoni
cs.AI

Abstract

The growing number of parameter-efficient adaptations of a base large language model (LLM) calls for studying whether we can reuse such trained adapters to improve performance for new tasks. We study how to best build a library of adapters given multi-task data and devise techniques for both zero-shot and supervised task generalization through routing in such a library. We benchmark existing approaches to build this library and introduce model-based clustering, MBC, a method that groups tasks based on the similarity of their adapter parameters, indirectly optimizing for transfer across the multi-task dataset. To reuse the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters for new inputs without the need for retraining. We experiment with several LLMs, such as Phi-2 and Mistral, on a wide array of held-out tasks, verifying that MBC-based adapters and Arrow routing lead to superior generalization to new tasks. We take steps towards creating modular, adaptable LLMs that can match or outperform traditional joint training.
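
The abstract's two components, MBC clustering of tasks by adapter-parameter similarity and Arrow routing over the resulting library, can be illustrated with a short sketch. The snippet below is a minimal toy example, not the authors' implementation: the toy LoRA library, the flatten-and-cluster step, and the use of the top singular vector of each LoRA update `B @ A` as a routing prototype are all assumptions made here for illustration.

```python
# Minimal sketch (not the paper's code): MBC-style task clustering and
# Arrow-style zero-shot routing over a library of LoRA adapters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Toy library: one LoRA adapter (A, B) per training task -------------
d, r, num_tasks = 64, 4, 12            # hidden size, LoRA rank, #tasks (toy values)
library = {
    f"task_{i}": {
        "A": rng.normal(size=(r, d)) * 0.02,   # LoRA "down" projection
        "B": rng.normal(size=(d, r)) * 0.02,   # LoRA "up" projection
    }
    for i in range(num_tasks)
}

# --- MBC: group tasks by the similarity of their adapter parameters -----
def flatten(adapter):
    """Concatenate a task's LoRA weights into a single vector."""
    return np.concatenate([adapter["A"].ravel(), adapter["B"].ravel()])

X = np.stack([flatten(a) for a in library.values()])
X = X / np.linalg.norm(X, axis=1, keepdims=True)        # cosine geometry
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for name, c in zip(library, clusters):
    print(name, "-> cluster", c)   # tasks in one cluster would share one adapter

# --- Arrow-style routing: score each adapter against a new input --------
def prototype(adapter):
    """Top right-singular vector of the LoRA update B @ A, used as a routing
    prototype (one possible reading of Arrow; an assumption in this sketch)."""
    delta_w = adapter["B"] @ adapter["A"]                # low-rank weight update
    _, _, vt = np.linalg.svd(delta_w, full_matrices=False)
    return vt[0]                                         # input direction most amplified

protos = np.stack([prototype(a) for a in library.values()])

def route(x, top_k=2):
    """Pick the top-k adapters whose prototypes align most with input x."""
    scores = np.abs(protos @ x)                          # sign-invariant alignment
    top = np.argsort(scores)[::-1][:top_k]
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    return list(zip(np.array(list(library))[top], weights))

x = rng.normal(size=d)                                   # stand-in for a hidden state
print(route(x))
```

Because the routing prototypes are derived purely from the adapter weights, selecting adapters for a new input needs no additional training data, which is the property the abstract highlights for Arrow.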

