Xmodel-LM Technical Report
June 5, 2024
Authors: Yichuan Wang, Yang Liu, Yu Yan, Xucheng Huang, Ling Jiang
cs.AI
Abstract
We introduce Xmodel-LM, a compact and efficient 1.1B-parameter language model
pre-trained on over 2 trillion tokens. Xmodel-LM is trained on our self-built
dataset (Xdata), which balances Chinese and English corpora according to
downstream task performance. Despite its small size, Xmodel-LM exhibits
remarkable performance, notably surpassing existing open-source language
models of similar scale. Our model checkpoints and code are publicly
accessible on GitHub at https://github.com/XiaoduoAILab/XmodelLM.