Xmodel-LM Technical Report
June 5, 2024
Authors: Yichuan Wang, Yang Liu, Yu Yan, Xucheng Huang, Ling Jiang
cs.AI
Abstract
We introduce Xmodel-LM, a compact and efficient 1.1B language model
pre-trained on over 2 trillion tokens. Trained on our self-built dataset
(Xdata), which balances Chinese and English corpora based on downstream task
optimization, Xmodel-LM exhibits remarkable performance despite its smaller
size. It notably surpasses existing open-source language models of similar
scale. Our model checkpoints and code are publicly accessible on GitHub at
https://github.com/XiaoduoAILab/XmodelLM.
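Since the abstract points to publicly released checkpoints, here is a minimal sketch of how one might load and sample from the model, assuming the checkpoints are published in a Hugging Face transformers-compatible format. The repository id below is hypothetical; consult the GitHub page above for the actual release location and loading instructions.

```python
# Minimal sketch: load an Xmodel-LM checkpoint and generate a short
# continuation. Assumes the `transformers` library is installed and the
# checkpoint is hosted in a transformers-compatible repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "XiaoduoAILab/XmodelLM"  # hypothetical repo id; see the GitHub page

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Generate a few tokens as a sanity check that the checkpoint loaded.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```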