

YAYI 2: Multilingual Open-Source Large Language Models

December 22, 2023
Authors: Yin Luo, Qingchao Kong, Nan Xu, Jia Cao, Bao Hao, Baoyu Qu, Bo Chen, Chao Zhu, Chenyang Zhao, Donglei Zhang, Fan Feng, Feifei Zhao, Hailong Sun, Hanxuan Yang, Haojun Pan, Hongyu Liu, Jianbin Guo, Jiangtao Du, Jingyi Wang, Junfeng Li, Lei Sun, Liduo Liu, Lifeng Dong, Lili Liu, Lin Wang, Liwen Zhang, Minzheng Wang, Pin Wang, Ping Yu, Qingxiao Li, Rui Yan, Rui Zou, Ruiqun Li, Taiwen Huang, Xiaodong Wang, Xiaofei Wu, Xin Peng, Xina Zhang, Xing Fang, Xinglin Xiao, Yanni Hao, Yao Dong, Yigang Wang, Ying Liu, Yongyu Jiang, Yungan Wang, Yuqi Wang, Zhangsheng Wang, Zhaoxin Yu, Zhen Luo, Wenji Mao, Lei Wang, Dajun Zeng
cs.AI

Abstract
As the latest advancements in natural language processing, large language models (LLMs) have achieved human-level language understanding and generation abilities in many real-world tasks, and even have been regarded as a potential path to the artificial general intelligence. To better facilitate research on LLMs, many open-source LLMs, such as Llama 2 and Falcon, have recently been proposed and gained comparable performances to proprietary models. However, these models are primarily designed for English scenarios and exhibit poor performances in Chinese contexts. In this technical report, we propose YAYI 2, including both base and chat models, with 30 billion parameters. YAYI 2 is pre-trained from scratch on a multilingual corpus which contains 2.65 trillion tokens filtered by our pre-training data processing pipeline. The base model is aligned with human values through supervised fine-tuning with millions of instructions and reinforcement learning from human feedback. Extensive experiments on multiple benchmarks, such as MMLU and CMMLU, consistently demonstrate that the proposed YAYI 2 outperforms other similar sized open-source models.
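The abstract notes that the base model is aligned via supervised fine-tuning (SFT) on millions of instructions. As a small illustration of how such instruction data is commonly prepared (this is a generic sketch, not code from the YAYI 2 paper), the loss is typically computed only on response tokens: prompt positions in the label sequence are masked with an ignore index, conventionally -100 in PyTorch-style training loops. The token ids below are hypothetical.

```python
# Generic SFT data-preparation sketch (not from the YAYI 2 paper):
# concatenate prompt and response token ids, and mask the prompt
# positions in the labels so the loss covers only the response.

IGNORE_INDEX = -100  # common convention for ignored label positions

def build_sft_labels(prompt_ids, response_ids):
    """Return (input_ids, labels) with prompt labels masked out."""
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

# Hypothetical token ids for an instruction and its answer.
prompt = [101, 2045, 7592]
response = [8795, 102]
ids, labels = build_sft_labels(prompt, response)
print(ids)     # [101, 2045, 7592, 8795, 102]
print(labels)  # [-100, -100, -100, 8795, 102]
```

During training, a cross-entropy loss with `ignore_index=-100` then skips the masked prompt positions, so gradients reflect only the model's response tokens.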