FinGPT: Large Generative Models for a Small Language
November 3, 2023
作者: Risto Luukkonen, Ville Komulainen, Jouni Luoma, Anni Eskelinen, Jenna Kanerva, Hanna-Mari Kupari, Filip Ginter, Veronika Laippala, Niklas Muennighoff, Aleksandra Piktus, Thomas Wang, Nouamane Tazi, Teven Le Scao, Thomas Wolf, Osma Suominen, Samuli Sairanen, Mikko Merioksa, Jyrki Heinonen, Aija Vahtola, Samuel Antao, Sampo Pyysalo
cs.AI
Abstract
Large language models (LLMs) excel in many tasks in NLP and beyond, but most
open models have very limited coverage of smaller languages and LLM work tends
to focus on languages where nearly unlimited data is available for pretraining.
In this work, we study the challenges of creating LLMs for Finnish, a language
spoken by less than 0.1% of the world population. We compile an extensive
dataset of Finnish combining web crawls, news, social media and eBooks. We
pursue two approaches to pretrain models: 1) we train seven monolingual models
from scratch (186M to 13B parameters), dubbed FinGPT; 2) we continue the
pretraining of the multilingual BLOOM model on a mix of its original training
data and Finnish, resulting in a 176 billion parameter model we call BLUUMI.
For model evaluation, we introduce FIN-bench, a version of BIG-bench with
Finnish tasks. We also assess other model qualities such as toxicity and bias.
Our models and tools are openly available at https://turkunlp.org/gpt3-finnish.
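For readers who want to try the released checkpoints, below is a minimal sketch of loading one of the FinGPT models for text generation with Hugging Face Transformers. The model identifier TurkuNLP/gpt3-finnish-large and the Finnish prompt are illustrative assumptions, not confirmed by the abstract; the published model names are listed at https://turkunlp.org/gpt3-finnish.

```python
# Minimal sketch: load a FinGPT checkpoint and generate a Finnish continuation.
# The model ID below is an assumption based on the project page; verify the
# actual checkpoint names at https://turkunlp.org/gpt3-finnish.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TurkuNLP/gpt3-finnish-large"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative Finnish prompt ("Finland is a country where").
prompt = "Suomi on maa, jossa"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```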