Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models
January 1, 2024
Authors: Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu, Niklas Muennighoff
cs.AI
Abstract
The high cost of full-parameter fine-tuning (FFT) of Large Language Models
(LLMs) has led to a series of parameter-efficient fine-tuning (PEFT) methods.
However, it remains unclear which methods provide the best cost-performance
trade-off at different model scales. We introduce Astraios, a suite of 28
instruction-tuned OctoCoder models using 7 tuning methods and 4 model sizes up
to 16 billion parameters. Through investigations across 5 tasks and 8 different
datasets encompassing both code comprehension and code generation tasks, we
find that FFT generally leads to the best downstream performance across all
scales, and PEFT methods differ significantly in their efficacy based on the
model scale. LoRA usually offers the most favorable trade-off between cost and
performance. Further investigation into the effects of these methods on both
model robustness and code security reveals that larger models tend to
demonstrate reduced robustness and weaker security. Finally, we explore the
relationships among updated parameters, cross-entropy loss, and task
performance. We find that the tuning effectiveness observed in small models
generalizes well to larger models, and the validation loss in instruction
tuning can be a reliable indicator of overall downstream performance.
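The favorable cost-performance trade-off attributed to LoRA above comes from training only a low-rank additive update to each frozen weight matrix. A minimal NumPy sketch of this idea (illustrative dimensions and scaling chosen here for exposition; this is not the paper's implementation):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass through a frozen weight W plus a scaled low-rank update B @ A."""
    return x @ (W + (alpha / r) * (B @ A)).T

d, k, r = 512, 512, 8                    # hypothetical layer sizes and rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor, r x k
B = np.zeros((d, r))                     # trainable factor, zero-initialized so
                                         # the update starts as a no-op

x = rng.standard_normal((1, k))
y = lora_forward(x, W, A, B)

full_params = d * k          # parameters updated by full fine-tuning (FFT)
lora_params = r * (d + k)    # parameters updated by LoRA
print(y.shape, lora_params / full_params)
```

With these toy sizes, LoRA trains roughly 3% of the parameters that FFT would update for the same layer, which is the kind of cost reduction the trade-off discussion refers to.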