Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
November 18, 2023
Authors: Clifton Poth, Hannah Sterz, Indraneil Paul, Sukannya Purkayastha, Leon Engländer, Timo Imhof, Ivan Vulić, Sebastian Ruder, Iryna Gurevych, Jonas Pfeiffer
cs.AI
Abstract
We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library's efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters.