
Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order

March 30, 2024
Authors: Taishi Nakamura, Mayank Mishra, Simone Tedeschi, Yekun Chai, Jason T Stillerman, Felix Friedrich, Prateek Yadav, Tanmay Laud, Vu Minh Chien, Terry Yue Zhuo, Diganta Misra, Ben Bogin, Xuan-Son Vu, Marzena Karpinska, Arnav Varma Dantuluri, Wojciech Kusa, Tommaso Furlanello, Rio Yokota, Niklas Muennighoff, Suhas Pai, Tosin Adewumi, Veronika Laippala, Xiaozhe Yao, Adalberto Junior, Alpay Ariyak, Aleksandr Drozd, Jordan Clive, Kshitij Gupta, Liangyu Chen, Qi Sun, Ken Tsui, Noah Persaud, Nour Fahmy, Tianlong Chen, Mohit Bansal, Nicolo Monti, Tai Dang, Ziyang Luo, Tien-Tung Bui, Roberto Navigli, Virendra Mehta, Matthew Blumberg, Victor May, Huu Nguyen, Sampo Pyysalo
cs.AI

Abstract

Pretrained language models underpin several AI applications, but the high computational cost of training them limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. However, existing models face challenges: limited multilingual capabilities, catastrophic forgetting under continual pretraining (while pretraining from scratch is computationally expensive), and compliance with AI safety and development laws. This paper presents Aurora-M, a 15B-parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435 billion additional tokens, Aurora-M surpasses 2 trillion tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, aligning its development not only with conventional red-teaming considerations but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Aurora-M is rigorously evaluated across various tasks and languages, demonstrating robustness against catastrophic forgetting and outperforming alternatives in multilingual settings, particularly in safety evaluations. To promote responsible open-source LLM development, Aurora-M and its variants are released at https://huggingface.co/collections/aurora-m/aurora-m-models-65fdfdff62471e09812f5407.
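
As a usage note, the released checkpoints can be loaded with the standard Hugging Face transformers API. Below is a minimal sketch, not an official example; the variant id used here (aurora-m/aurora-m-biden-harris-redteamed) is an assumption based on the collection's naming and should be verified against the collection page linked above:

```python
# Minimal sketch: loading an Aurora-M variant with Hugging Face transformers.
# The model id is an assumption; check the collection page for exact variant names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurora-m/aurora-m-biden-harris-redteamed"  # assumed variant id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; omit it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Aurora-M is trained on English, Finnish, Hindi, Japanese, Vietnamese, and
# code, so plain-text prompts in any of these languages should work.
prompt = "Translate to Finnish: Good morning!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```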
