LLaMA Beyond English: An Empirical Study on Language Capability Transfer
January 2, 2024
Authors: Jun Zhao, Zhihao Zhang, Qi Zhang, Tao Gui, Xuanjing Huang
cs.AI
Abstract
In recent times, substantial advancements have been witnessed in large
language models (LLMs), exemplified by ChatGPT, showcasing remarkable
proficiency across a range of complex tasks. However, many mainstream LLMs
(e.g., LLaMA) are pretrained on English-dominant corpora, which limits their
performance in non-English languages. In this paper, we ask how to
effectively transfer the capabilities of language generation and instruction
following to a non-English language. To answer this question, we conduct an
extensive empirical investigation based on LLaMA, accumulating over 1440 GPU
hours. We analyze the impact of key factors such as vocabulary extension,
further pretraining, and instruction tuning on transfer. To accurately assess
the model's level of knowledge, we employ four widely used standardized testing
benchmarks: C-Eval, MMLU, AGI-Eval, and GAOKAO-Bench. Furthermore, a
comprehensive evaluation of the model's response quality is conducted,
considering aspects such as accuracy, fluency, informativeness, logical
coherence, and harmlessness, based on LLM-Eval, a benchmark consisting of
instruction tasks from 17 diverse categories. Our evaluation results
demonstrate that comparable performance to state-of-the-art transfer models can
be achieved with less than 1% of the pretraining data, both in terms of
knowledge alignment and response quality. Furthermore, the experimental
outcomes across thirteen low-resource languages exhibit similar
trends. We anticipate that the conclusions revealed by the experiments will aid
the community in developing non-English LLMs.
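
To make the vocabulary-extension factor concrete, the sketch below shows its general mechanics in a Hugging Face transformers setup: target-language tokens are appended to the tokenizer, and the model's embedding matrices are resized before further pretraining. This is a minimal illustration under assumed names; the checkpoint identifier and token list are hypothetical, not code released with the paper.

```python
# A minimal sketch of vocabulary extension, one of the transfer factors the
# paper analyzes, assuming a Hugging Face transformers setup (the paper does
# not publish this exact code).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "huggyllama/llama-7b"  # hypothetical LLaMA checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical target-language tokens; in practice these would be frequent
# subwords mined from a target-language corpus with a subword learner such
# as SentencePiece.
new_tokens = ["你好", "世界", "语言", "模型"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the input/output embedding matrices so the new token IDs have rows.
# The new rows start randomly initialized and are learned during further
# pretraining and instruction tuning on target-language data.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```

The design rationale is that an English-centric tokenizer fragments non-English text into many short byte-level pieces; adding target-language tokens shortens sequences, while the subsequent further pretraining step teaches the model what the new embeddings mean.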