

Rethinking Optimization and Architecture for Tiny Language Models

February 5, 2024
Authors: Yehui Tang, Fangcheng Liu, Yunsheng Ni, Yuchuan Tian, Zheyuan Bai, Yi-Qi Hu, Sichao Liu, Shangling Jui, Kai Han, Yunhe Wang
cs.AI

Abstract

The power of large language models (LLMs) has been demonstrated with vast amounts of data and computing resources. However, deploying language models on mobile devices faces a major challenge in computation and memory costs; that is, tiny language models with high performance are urgently required. Because the training process is highly complex, many details of optimizing language models are seldom studied carefully. In this study, based on a tiny language model with 1B parameters, we carefully design a series of empirical studies to analyze the effect of each component. Three perspectives are mainly discussed, i.e., neural architecture, parameter initialization, and optimization strategy. Several design formulas are empirically shown to be especially effective for tiny language models, including tokenizer compression, architecture tweaking, parameter inheritance, and multiple-round training. We then train PanGu-pi-1B Pro and PanGu-pi-1.5B Pro on 1.6T multilingual corpora, following the established formulas. Experimental results demonstrate that the improved optimization and architecture yield a notable average improvement of 8.87 on benchmark evaluation sets for PanGu-pi-1B Pro. Moreover, PanGu-pi-1.5B Pro surpasses a range of SOTA models with larger sizes, validating its superior performance. The code will be released soon (https://github.com/YuchuanTian/RethinkTinyLM).
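The abstract names parameter inheritance as one of the design formulas found effective for tiny models. Below is a minimal, hypothetical sketch of the general idea: initializing a small model by copying evenly spaced transformer layers from a larger pretrained model. The checkpoint names, the attribute path to the layer stack, and the layer-selection rule are assumptions for illustration, not the paper's exact recipe, and the sketch assumes both models share the same hidden dimension so layer weights are directly compatible.

```python
# Illustrative sketch of layer-wise parameter inheritance (not the paper's exact method).
# Assumes a LLaMA-style module layout ("model.layers") and matching hidden dimensions.
import torch
from transformers import AutoModelForCausalLM

def inherit_layers(large_model, small_model, layer_attr="model.layers"):
    """Copy evenly spaced transformer layers from the large model into the small model."""
    def get_layers(m):
        obj = m
        for name in layer_attr.split("."):
            obj = getattr(obj, name)
        return obj

    big_layers = get_layers(large_model)
    small_layers = get_layers(small_model)
    # Pick len(small_layers) evenly spaced indices from the larger layer stack.
    idx = torch.linspace(0, len(big_layers) - 1, len(small_layers)).round().long()
    for tgt_layer, src_idx in zip(small_layers, idx.tolist()):
        tgt_layer.load_state_dict(big_layers[src_idx].state_dict())

# Usage (hypothetical checkpoints):
# large = AutoModelForCausalLM.from_pretrained("some-org/large-7b")
# small = AutoModelForCausalLM.from_pretrained("some-org/tiny-1b")
# inherit_layers(large, small)
```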