
Rethinking Optimization and Architecture for Tiny Language Models

February 5, 2024
Authors: Yehui Tang, Fangcheng Liu, Yunsheng Ni, Yuchuan Tian, Zheyuan Bai, Yi-Qi Hu, Sichao Liu, Shangling Jui, Kai Han, Yunhe Wang
cs.AI

Abstract

The power of large language models (LLMs) has been demonstrated with vast amounts of data and computing resources. However, deploying language models on mobile devices faces major challenges in computation and memory cost; that is, tiny language models with high performance are urgently required. Because the training process is highly complex, many details of optimizing language models are seldom studied carefully. In this study, based on a tiny language model with 1B parameters, we carefully design a series of empirical studies to analyze the effect of each component. Three perspectives are mainly discussed, i.e., neural architecture, parameter initialization, and optimization strategy. Several design formulas are empirically shown to be especially effective for tiny language models, including tokenizer compression, architecture tweaking, parameter inheritance, and multiple-round training. We then train PanGu-pi-1B Pro and PanGu-pi-1.5B Pro on 1.6T multilingual corpora, following the established formulas. Experimental results demonstrate that the improved optimization and architecture yield a notable average improvement of 8.87 on benchmark evaluation sets for PanGu-pi-1B Pro. In addition, PanGu-pi-1.5B Pro surpasses a range of SOTA models with larger model sizes, validating its superior performance. The code will be released soon (https://github.com/YuchuanTian/RethinkTinyLM).
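As a rough illustration of one of the techniques named in the abstract, the sketch below shows one way "parameter inheritance" could look in code: a smaller model is initialized from an evenly spaced subset of a larger checkpoint's transformer layers. The layer naming scheme, the uniform selection rule, and the `inherit_parameters` helper are assumptions made for illustration, not the authors' released implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of parameter inheritance: initialize a smaller model
# from a larger pretrained checkpoint by keeping a subset of its layers.
# The key format "layers.<idx>.<param>" and the even-spacing rule are
# illustrative assumptions, not the paper's exact procedure.
import re

import torch


def inherit_parameters(large_state: dict, num_large_layers: int, num_small_layers: int) -> dict:
    """Build a small-model state dict from an evenly spaced subset of large-model layers."""
    # Choose which large-model layers the small model inherits (evenly spaced).
    keep = [round(i * (num_large_layers - 1) / (num_small_layers - 1)) for i in range(num_small_layers)]
    remap = {old: new for new, old in enumerate(keep)}

    small_state = {}
    for name, weight in large_state.items():
        match = re.match(r"layers\.(\d+)\.(.+)", name)
        if match is None:
            # Non-layer tensors (embeddings, final norm, lm head) are copied directly.
            small_state[name] = weight.clone()
            continue
        old_idx = int(match.group(1))
        if old_idx in remap:
            # Renumber the kept layer so the small model's indices are contiguous.
            small_state[f"layers.{remap[old_idx]}.{match.group(2)}"] = weight.clone()
    return small_state


if __name__ == "__main__":
    # Toy example: reduce a 4-layer "large" model to 2 layers (keeps layers 0 and 3).
    large = {f"layers.{i}.w": torch.randn(2, 2) for i in range(4)}
    large["embed.weight"] = torch.randn(8, 2)
    small = inherit_parameters(large, num_large_layers=4, num_small_layers=2)
    print(sorted(small.keys()))
```

Other selection rules (e.g., keeping the first or last layers, or choosing layers by importance scores) fit the same interface; the paper's empirical study is what motivates which variant works best for tiny models.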