Specialized Language Models with Cheap Inference from Limited Domain Data
February 2, 2024
Authors: David Grangier, Angelos Katharopoulos, Pierre Ablin, Awni Hannun
cs.AI
Abstract
Large language models have emerged as a versatile tool but are challenging to
apply to tasks lacking large inference budgets and large in-domain training
sets. This work formalizes these constraints and distinguishes four important
variables: the pretraining budget (for training before the target domain is
known), the specialization budget (for training after the target domain is
known), the inference budget, and the in-domain training set size. Across these
settings, we compare different approaches from the machine learning literature.
Limited by inference cost, we find better alternatives to the standard practice
of training very large vanilla transformer models. In particular, we show that
hyper-networks and mixture of experts have better perplexity for large
pretraining budgets, while small models trained on importance sampled datasets
are attractive for large specialization budgets.
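The abstract mentions training small models on importance-sampled datasets but does not spell out the sampling procedure. Below is a minimal sketch of one common importance-sampling scheme for selecting domain-relevant training data: each generic-corpus example is weighted by the ratio of its likelihood under a domain model to its likelihood under a generic model, and a training subset is drawn in proportion to those weights. The function names and the two log-probability callables are hypothetical illustrations, not the paper's actual interface.

```python
import math
import random

def importance_weights(texts, domain_logprob, generic_logprob):
    """Weight each generic-corpus text by how domain-like it is.

    domain_logprob and generic_logprob are assumed callables returning
    the log-probability of a text under a small in-domain model and a
    generic model, respectively (hypothetical, for illustration).
    The weight is w(x) = p_domain(x) / p_generic(x).
    """
    return [math.exp(domain_logprob(t) - generic_logprob(t)) for t in texts]

def sample_training_subset(texts, weights, k, seed=0):
    """Draw a k-example training set, favoring domain-like texts."""
    rng = random.Random(seed)
    return rng.choices(texts, weights=weights, k=k)
```

Under the paper's framing, a scheme like this spends the specialization budget on data selection so that a small model, which is cheap at inference time, can be trained on the most domain-relevant portion of a large generic corpus.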