StarCoder 2 and The Stack v2: The Next Generation
February 29, 2024
作者: Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
cs.AI
Abstract
The BigCode project, an open-scientific collaboration focused on the
responsible development of Large Language Models for Code (Code LLMs),
introduces StarCoder2. In partnership with Software Heritage (SWH), we build
The Stack v2 on top of the digital commons of their source code archive.
Alongside the SWH repositories spanning 619 programming languages, we carefully
select other high-quality data sources, such as GitHub pull requests, Kaggle
notebooks, and code documentation. This results in a training set that is 4x
larger than the first StarCoder dataset. We train StarCoder2 models with 3B,
7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate
them on a comprehensive set of Code LLM benchmarks. We find that our small
model, StarCoder2-3B, outperforms other Code LLMs of similar size on most
benchmarks, and also outperforms StarCoderBase-15B. Our large model,
StarCoder2-15B, significantly outperforms other models of comparable size. In
addition, it matches or outperforms CodeLlama-34B, a model more than twice its
size. Although DeepSeekCoder-33B is the best-performing model at code
completion for high-resource languages, we find that StarCoder2-15B outperforms
it on math and code reasoning benchmarks, as well as several low-resource
languages. We make the model weights available under an OpenRAIL license and
ensure full transparency regarding the training data by releasing the SoftWare
Heritage persistent IDentifiers (SWHIDs) of the source code data.
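As a concrete illustration of the transparency claim, the sketch below shows how a released SWHID could be resolved against the public Software Heritage archive. This is a minimal example and not part of the paper: the /api/1/resolve/ endpoint of the Software Heritage Web API and the sample identifier are assumptions used for illustration only.

```python
"""Minimal sketch: resolving a Software Heritage persistent identifier (SWHID).

Assumptions (not from the abstract): the public Software Heritage Web API at
archive.softwareheritage.org exposes /api/1/resolve/<swhid>/, and the SWHID
below is purely illustrative.
"""
import json
import urllib.request

# A core SWHID has the form swh:1:<object_type>:<40-char sha1_git hex digest>,
# where <object_type> is one of cnt, dir, rev, rel, snp.
EXAMPLE_SWHID = "swh:1:cnt:94a9ed024d3859793618152ea559a168bbcbb5e2"


def resolve_swhid(swhid: str) -> dict:
    """Ask the Software Heritage archive which object a SWHID points to."""
    url = f"https://archive.softwareheritage.org/api/1/resolve/{swhid}/"
    with urllib.request.urlopen(url) as response:
        return json.load(response)


if __name__ == "__main__":
    info = resolve_swhid(EXAMPLE_SWHID)
    # Typical response fields include object_type, object_id, and a browse_url
    # for viewing the object in the archive's web interface.
    print(info.get("object_type"), info.get("browse_url"))
```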