StarCoder: may the source be with you!

May 9, 2023
Authors: Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries
cs.AI

Abstract

The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
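For context on the 40% pass@1 figure: HumanEval results are conventionally reported with the unbiased pass@k estimator of Chen et al. (2021), which estimates the probability that at least one of k sampled completions passes the unit tests, given n samples of which c pass. A minimal sketch, assuming that standard estimator (the paper's exact evaluation harness may differ):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    probability that at least one of k completions drawn
    from n samples (c of them correct) passes the tests."""
    if n - c < k:
        return 1.0  # too few failures for k draws to all fail
    # 1 - C(n-c, k) / C(n, k), computed stably as a running product
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: 200 samples per problem, 80 passing, k = 1  ->  0.4 (40% pass@1)
print(pass_at_k(200, 80, 1))
```

For k = 1 this reduces to the fraction of samples that pass, c / n, which is why pass@1 can be read as the single-shot success rate.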
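Since the abstract highlights both the open release and the infilling capability, here is a minimal usage sketch, assuming the bigcode/starcoder checkpoint on the Hugging Face Hub and the fill-in-the-middle sentinel tokens documented on its model card (<fim_prefix>, <fim_suffix>, <fim_middle>); this is an illustration, not the paper's own tooling:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # public checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Fill-in-the-middle prompt: supply the prefix and suffix of a function,
# then ask the model to generate the missing middle.
prompt = "<fim_prefix>def fib(n):\n<fim_suffix>\n    return a\n<fim_middle>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```

The prefix-suffix-middle ordering lets a left-to-right decoder condition on code both before and after the cursor, which is what makes editor-style infilling possible with a standard causal LM.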