LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models
May 28, 2024
Authors: Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Sairam Sundaresan
cs.AI
Abstract
The abilities of modern large language models (LLMs) in solving natural
language processing, complex reasoning, sentiment analysis and other tasks have
been extraordinary, which has prompted their extensive adoption. Unfortunately,
these abilities come with very high memory and computational costs, which
preclude the use of LLMs on most hardware platforms. To mitigate this, we
propose an effective method of finding Pareto-optimal network architectures
based on LLaMA2-7B using one-shot NAS. In particular, we fine-tune LLaMA2-7B
only once and then apply genetic algorithm-based search to find smaller, less
computationally complex network architectures. We show that, for certain
standard benchmark tasks, the pre-trained LLaMA2-7B network is unnecessarily
large and complex. More specifically, we demonstrate a 1.5x reduction in model
size and 1.3x speedup in throughput for certain tasks with negligible drop in
accuracy. In addition to finding smaller, higher-performing network
architectures, our method does so more effectively and efficiently than certain
pruning or sparsification techniques. Finally, we demonstrate how quantization
is complementary to our method and that the size and complexity of the networks
we find can be further decreased using quantization. We believe that our work
provides a way to automatically create LLMs which can be used on less expensive
and more readily available hardware platforms.
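The abstract describes a genetic algorithm searching over sub-networks of a once-fine-tuned LLaMA2-7B super-network for Pareto-optimal trade-offs between model size and accuracy. The paper listing itself contains no code; the following is a minimal, self-contained sketch of that kind of Pareto-aware genetic search. The search space (per-layer width multipliers), the objective functions, and all names here are illustrative assumptions, not the authors' actual implementation.

```python
import random

# Hypothetical search space: per-layer width multipliers for a 32-layer
# LLaMA2-7B-like super-network (values are illustrative, not the paper's).
NUM_LAYERS = 32
CHOICES = [0.5, 0.75, 1.0]  # fraction of each layer's width that is kept

def random_arch():
    return [random.choice(CHOICES) for _ in range(NUM_LAYERS)]

def mutate(arch, p=0.1):
    # Resample each gene with probability p.
    return [random.choice(CHOICES) if random.random() < p else g for g in arch]

def crossover(a, b):
    # Single-point crossover between two parent architectures.
    cut = random.randrange(1, NUM_LAYERS)
    return a[:cut] + b[cut:]

def model_size(arch):
    # Relative size proxy: mean kept width (1.0 == full-size network).
    return sum(arch) / len(arch)

def accuracy(arch):
    # Placeholder objective standing in for evaluating the fine-tuned
    # super-network with this sub-network activated on a benchmark task.
    size = model_size(arch)
    unevenness = sum(abs(g - size) for g in arch) / len(arch)
    return 0.6 + 0.4 * size - 0.2 * unevenness

def pareto_front(pop):
    # Keep architectures not dominated on (smaller size, higher accuracy).
    scored = [(model_size(a), accuracy(a), a) for a in pop]
    front = []
    for s, acc, a in scored:
        dominated = any(s2 <= s and acc2 >= acc and (s2 < s or acc2 > acc)
                        for s2, acc2, _ in scored)
        if not dominated:
            front.append((s, acc, a))
    return front

def search(generations=20, pop_size=32, seed=0):
    random.seed(seed)
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        parents = [a for _, _, a in pareto_front(pop)]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(max(0, pop_size - len(parents)))]
        pop = parents + children
    return pareto_front(pop)
```

In this sketch, survivors are exactly the current Pareto front, and offspring refill the population; a production search would instead evaluate real benchmark accuracy and measured model size or latency for each candidate, which is far more expensive than these closed-form proxies.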