
YAYI 2: Multilingual Open-Source Large Language Models

December 22, 2023
Authors: Yin Luo, Qingchao Kong, Nan Xu, Jia Cao, Bao Hao, Baoyu Qu, Bo Chen, Chao Zhu, Chenyang Zhao, Donglei Zhang, Fan Feng, Feifei Zhao, Hailong Sun, Hanxuan Yang, Haojun Pan, Hongyu Liu, Jianbin Guo, Jiangtao Du, Jingyi Wang, Junfeng Li, Lei Sun, Liduo Liu, Lifeng Dong, Lili Liu, Lin Wang, Liwen Zhang, Minzheng Wang, Pin Wang, Ping Yu, Qingxiao Li, Rui Yan, Rui Zou, Ruiqun Li, Taiwen Huang, Xiaodong Wang, Xiaofei Wu, Xin Peng, Xina Zhang, Xing Fang, Xinglin Xiao, Yanni Hao, Yao Dong, Yigang Wang, Ying Liu, Yongyu Jiang, Yungan Wang, Yuqi Wang, Zhangsheng Wang, Zhaoxin Yu, Zhen Luo, Wenji Mao, Lei Wang, Dajun Zeng
cs.AI

Abstract

With the latest advancements in natural language processing, large language models (LLMs) have achieved human-level language understanding and generation in many real-world tasks, and have even been regarded as a potential path to artificial general intelligence. To better facilitate research on LLMs, many open-source LLMs, such as Llama 2 and Falcon, have recently been proposed, achieving performance comparable to proprietary models. However, these models are primarily designed for English scenarios and perform poorly in Chinese contexts. In this technical report, we propose YAYI 2, including both base and chat models with 30 billion parameters. YAYI 2 is pre-trained from scratch on a multilingual corpus containing 2.65 trillion tokens filtered by our pre-training data processing pipeline. The base model is aligned with human values through supervised fine-tuning on millions of instructions and reinforcement learning from human feedback. Extensive experiments on multiple benchmarks, such as MMLU and CMMLU, consistently demonstrate that YAYI 2 outperforms other open-source models of similar size.
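The abstract mentions a pre-training data processing pipeline that filters the multilingual corpus down to 2.65 trillion tokens. As a rough illustration of what document-level heuristic filtering in such a pipeline can look like, here is a minimal sketch; the specific rules and thresholds below are hypothetical and are not taken from the YAYI 2 report.

```python
# Minimal sketch of heuristic pre-training data filtering.
# The thresholds and rules are illustrative assumptions, not YAYI 2's actual pipeline.

def keep_document(text: str, min_chars: int = 50, max_symbol_ratio: float = 0.3) -> bool:
    """Return True if a raw document passes simple quality heuristics."""
    if len(text) < min_chars:  # drop very short fragments
        return False
    # Ratio of non-alphanumeric, non-whitespace characters (e.g. markup debris)
    symbols = sum(1 for c in text if not c.isalnum() and not c.isspace())
    if symbols / len(text) > max_symbol_ratio:
        return False
    return True

corpus = [
    "Large language models achieve strong multilingual performance on many tasks.",
    "<<<>>> ### $$$ ///",  # symbol-heavy markup debris
    "too short",           # below the minimum-length threshold
]
filtered = [doc for doc in corpus if keep_document(doc)]
```

Real pipelines typically layer many more stages on top of heuristics like these (deduplication, language identification, toxicity and quality classifiers) before tokenization.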