
Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying

November 16, 2023
Authors: Adithya Renduchintala, Tugrul Konuk, Oleksii Kuchaiev
cs.AI

Abstract

We propose Tied-LoRA, a simple paradigm that utilizes weight tying and selective training to further increase the parameter efficiency of the low-rank adaptation (LoRA) method. Our investigation covers all feasible combinations of parameter training/freezing in conjunction with weight tying to identify the optimal balance between performance and the number of trainable parameters. Through experiments spanning a variety of tasks and two base language models, we provide an analysis revealing the trade-offs between efficiency and performance. Our experiments uncover a particular Tied-LoRA configuration that stands out by demonstrating comparable performance across several tasks while using only 13% of the parameters employed by the standard LoRA method.
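
The core idea is that the low-rank factors A and B are tied (shared) across all adapted layers, and the different training/freezing combinations determine which pieces (the shared factors, per-layer components, or both) are actually updated. Below is a minimal PyTorch sketch of that idea, assuming per-layer scaling vectors as the layer-specific component; the class name `TiedLoRALinear`, the vectors `u` and `v`, and all dimensions are illustrative assumptions, not the paper's released code or exact parameterization.

```python
# Sketch of weight tying for LoRA: one shared pair of low-rank factors (A, B)
# serves every adapted layer, with small per-layer scaling vectors providing
# layer-specific capacity. Names and shapes are assumptions for illustration.
import torch
import torch.nn as nn


class TiedLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update whose A/B factors are tied across layers."""

    def __init__(self, base: nn.Linear, shared_A: nn.Parameter,
                 shared_B: nn.Parameter, train_scales: bool = True):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base model weights stay frozen
        self.A = shared_A                    # (in_features, r), shared across layers
        self.B = shared_B                    # (r, out_features), shared across layers
        r = shared_A.shape[1]
        # Per-layer scaling vectors; training or freezing these (and A/B) gives
        # the training/freezing combinations explored in the paper.
        self.u = nn.Parameter(torch.ones(r), requires_grad=train_scales)
        self.v = nn.Parameter(torch.ones(base.out_features), requires_grad=train_scales)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (x @ self.A) * self.u        # down-project, scale per rank dimension
        delta = (delta @ self.B) * self.v    # up-project, scale per output dimension
        return self.base(x) + delta


# Usage: a single tied (A, B) pair is reused by every adapted layer, so the
# trainable budget is 2*d*r (shared) + n_layers*(r + d) (per-layer scales),
# versus n_layers * 2*d*r for standard per-layer LoRA.
d_model, r, n_layers = 768, 8, 12
shared_A = nn.Parameter(torch.randn(d_model, r) * 0.01)
shared_B = nn.Parameter(torch.zeros(r, d_model))
layers = [TiedLoRALinear(nn.Linear(d_model, d_model), shared_A, shared_B)
          for _ in range(n_layers)]
```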