MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series
May 29, 2024
Authors: Ge Zhang, Scott Qu, Jiaheng Liu, Chenchen Zhang, Chenghua Lin, Chou Leuang Yu, Danny Pan, Esther Cheng, Jie Liu, Qunshu Lin, Raven Yuan, Tuney Zheng, Wei Pang, Xinrun Du, Yiming Liang, Yinghao Ma, Yizhi Li, Ziyang Ma, Bill Lin, Emmanouil Benetos, Huan Yang, Junting Zhou, Kaijing Ma, Minghao Liu, Morry Niu, Noah Wang, Quehry Que, Ruibo Liu, Sine Liu, Shawn Guo, Soren Gao, Wangchunshu Zhou, Xinyue Zhang, Yizhi Zhou, Yubo Wang, Yuelin Bai, Yuhan Zhang, Yuxiang Zhang, Zenith Wang, Zhenzhu Yang, Zijian Zhao, Jiajun Zhang, Wanli Ouyang, Wenhao Huang, Wenhu Chen
cs.AI
Abstract
Large Language Models (LLMs) have made great strides in recent years, achieving unprecedented performance across a wide range of tasks. However, due to commercial interests, the most competitive models such as GPT, Gemini, and Claude are gated behind proprietary interfaces, with no training details disclosed. Recently, many institutions have open-sourced strong LLMs such as LLaMA-3 that rival existing closed-source models; however, only the model weights are released, while most details (e.g., intermediate checkpoints, the pre-training corpus, and training code) remain undisclosed. To improve the transparency of LLMs, the research community has begun releasing truly open LLMs (e.g., Pythia, Amber, OLMo) that provide more of these details, such as the pre-training corpus and training code. These models have greatly advanced the scientific study of large models, including their strengths, weaknesses, biases, and risks. However, we observe that existing truly open LLMs still lag behind state-of-the-art LLMs of similar size on reasoning, knowledge, and coding tasks. To this end, we open-source MAP-Neo, a highly capable and transparent bilingual language model with 7B parameters, trained from scratch on 4.5T high-quality tokens. MAP-Neo is the first fully open-sourced bilingual LLM with performance comparable to existing state-of-the-art LLMs. Moreover, we release everything needed to reproduce MAP-Neo, including the cleaned pre-training corpus, the data-cleaning pipeline, intermediate checkpoints, and a well-optimized training/evaluation framework. Finally, we hope MAP-Neo will strengthen the open research community and inspire further innovation, facilitating continued improvements in LLMs.
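
Since the abstract states that the checkpoints and a training/evaluation framework are released, a minimal sketch of loading one of the published checkpoints with the Hugging Face transformers library is given below. The repository identifier "m-a-p/neo_7b" is an assumption for illustration only and should be replaced with whatever identifier the authors actually publish.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repository id for the released 7B checkpoint;
# replace it with the identifier the authors actually publish.
model_id = "m-a-p/neo_7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation to sanity-check the loaded weights.
prompt = "Truly open LLMs matter because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))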