Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying
November 16, 2023
Authors: Adithya Renduchintala, Tugrul Konuk, Oleksii Kuchaiev
cs.AI
Abstract
We propose Tied-LoRA, a simple paradigm that utilizes weight tying and selective
training to further increase the parameter efficiency of the Low-rank Adaptation
(LoRA) method. Our investigation covers all feasible combinations of parameter
training/freezing in conjunction with weight tying, to identify the optimal
balance between performance and the number of trainable parameters. Through
experiments covering a variety of tasks and two base language models, we
provide analysis revealing trade-offs between efficiency and performance. Our
experiments uncovered a particular Tied-LoRA configuration that stands out by
demonstrating comparable performance across several tasks while employing only
13% of the parameters used by the standard LoRA method.
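The abstract does not spell out the mechanics, but the core idea can be illustrated with a minimal sketch: instead of allocating a fresh pair of low-rank matrices A and B per adapted layer as in standard LoRA, a single shared pair is tied across all layers, and the frozen base weights are never updated. The class and variable names below (TiedLoRALinear, shared_A, shared_B) are hypothetical, and the sketch omits the per-layer selective-training variants the paper enumerates.

```python
import torch
import torch.nn as nn

class TiedLoRALinear(nn.Module):
    """A frozen linear layer plus a low-rank LoRA update whose A and B
    matrices are shared (tied) across layers rather than per-layer.
    Hypothetical illustration; not the paper's reference implementation."""

    def __init__(self, base: nn.Linear, shared_A: nn.Parameter,
                 shared_B: nn.Parameter, scale: float = 1.0):
        super().__init__()
        self.base = base
        # Selective training: the pretrained base weights stay frozen.
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = shared_A  # shape (r, d_in), one copy shared by all layers
        self.B = shared_B  # shape (d_out, r), one copy shared by all layers
        self.scale = scale

    def forward(self, x):
        # y = W x + scale * B A x, with A and B tied across layers
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Build the shared low-rank pair once, then reuse it for every adapted layer;
# the trainable parameter count no longer grows with network depth.
d, r, n_layers = 768, 8, 12
shared_A = nn.Parameter(torch.randn(r, d) * 0.01)
shared_B = nn.Parameter(torch.zeros(d, r))  # zero-init so the update starts at 0
layers = [TiedLoRALinear(nn.Linear(d, d), shared_A, shared_B)
          for _ in range(n_layers)]
```

Under these assumptions, the tied adapter costs 2dr parameters in total rather than 2dr per layer, which is the kind of reduction that makes a 13%-of-LoRA budget plausible.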