Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order
March 30, 2024
作者: Taishi Nakamura, Mayank Mishra, Simone Tedeschi, Yekun Chai, Jason T Stillerman, Felix Friedrich, Prateek Yadav, Tanmay Laud, Vu Minh Chien, Terry Yue Zhuo, Diganta Misra, Ben Bogin, Xuan-Son Vu, Marzena Karpinska, Arnav Varma Dantuluri, Wojciech Kusa, Tommaso Furlanello, Rio Yokota, Niklas Muennighoff, Suhas Pai, Tosin Adewumi, Veronika Laippala, Xiaozhe Yao, Adalberto Junior, Alpay Ariyak, Aleksandr Drozd, Jordan Clive, Kshitij Gupta, Liangyu Chen, Qi Sun, Ken Tsui, Noah Persaud, Nour Fahmy, Tianlong Chen, Mohit Bansal, Nicolo Monti, Tai Dang, Ziyang Luo, Tien-Tung Bui, Roberto Navigli, Virendra Mehta, Matthew Blumberg, Victor May, Huu Nguyen, Sampo Pyysalo
cs.AI
Abstract
Pretrained language models underpin several AI applications, but their high
computational cost for training limits accessibility. Initiatives such as BLOOM
and StarCoder aim to democratize access to pretrained models for collaborative
community development. However, existing models face several challenges: limited
multilingual capabilities, catastrophic forgetting under continual pretraining,
the high computational cost of pretraining from scratch, and compliance with AI
safety and development laws. This paper presents Aurora-M, a
15B parameter multilingual open-source model trained on English, Finnish,
Hindi, Japanese, Vietnamese, and code. Continually pretrained from
StarCoderPlus on 435 billion additional tokens, Aurora-M surpasses 2 trillion
tokens in total training token count. It is the first open-source multilingual
model fine-tuned on human-reviewed safety instructions, thus aligning its
development not only with conventional red-teaming considerations, but also
with the specific concerns articulated in the Biden-Harris Executive Order on
the Safe, Secure, and Trustworthy Development and Use of Artificial
Intelligence. Aurora-M is rigorously evaluated across various tasks and
languages, demonstrating robustness against catastrophic forgetting and
outperforming alternatives in multilingual settings, particularly in safety
evaluations. To promote responsible open-source LLM development, Aurora-M and
its variants are released at
https://huggingface.co/collections/aurora-m/aurora-m-models-65fdfdff62471e09812f5407 .
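For readers who want to try the released checkpoints, the sketch below loads an
Aurora-M model with the Hugging Face transformers library and generates a short
continuation. The model identifier "aurora-m/aurora-m-base" is an assumption for
illustration; the actual identifiers should be taken from the collection linked
above.

    # Minimal sketch: loading an Aurora-M checkpoint with Hugging Face transformers.
    # The model id "aurora-m/aurora-m-base" is a hypothetical placeholder; pick the
    # real identifier from the collection URL above.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "aurora-m/aurora-m-base"  # assumption: replace with a released model id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Generate a short continuation to verify the model loads and runs.
    inputs = tokenizer("Aurora-M is a multilingual language model that",
                       return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))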