Observational Scaling Laws and the Predictability of Language Model Performance
May 17, 2024
Authors: Yangjun Ruan, Chris J. Maddison, Tatsunori Hashimoto
cs.AI
Abstract
Understanding how language model performance varies with scale is critical to
benchmark and algorithm development. Scaling laws are one approach to building
this understanding, but the requirement of training models across many
different scales has limited their use. We propose an alternative,
observational approach that bypasses model training and instead builds scaling
laws from ~80 publicly available models. Building a single scaling law from
multiple model families is challenging due to large variations in their
training compute efficiencies and capabilities. However, we show that these
variations are consistent with a simple, generalized scaling law where language
model performance is a function of a low-dimensional capability space, and
model families only vary in their efficiency in converting training compute to
capabilities. Using this approach, we show the surprising predictability of
complex scaling phenomena: we show that several emergent phenomena follow a
smooth, sigmoidal behavior and are predictable from small models; we show that
the agent performance of models such as GPT-4 can be precisely predicted from
simpler non-agentic benchmarks; and we show how to predict the impact of
post-training interventions like Chain-of-Thought and Self-Consistency as
language model capabilities continue to improve.
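The sketch below illustrates the recipe the abstract describes: extract a low-dimensional capability measure from standardized benchmark scores, fit a per-family map from log training compute to that capability, and fit a sigmoidal link from capability to a downstream metric. It is a minimal illustration, not the authors' code: the synthetic data, the use of PCA and a one-dimensional capability, the per-family linear compute-to-capability fit, and all variable names are assumptions made for demonstration.

```python
# Minimal sketch of an observational scaling law, assuming:
# (1) benchmark scores from many public models share a low-dimensional
#     capability space (extracted here with PCA),
# (2) each model family converts log training compute to capability
#     linearly, with its own efficiency (slope/offset),
# (3) a downstream metric is a sigmoid of the shared capability, so it
#     can be fit on small models and extrapolated to larger ones.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in for ~80 public models from three hypothetical families;
# each family has its own compute-to-capability efficiency (slope, offset).
families = {"A": (1.0, -20.0), "B": (0.8, -15.5), "C": (1.2, -24.0)}
log_compute, latent, fam_ids = [], [], []
for fam, (a, b) in families.items():
    lc = rng.uniform(18, 26, size=27)             # log-FLOPs per model
    log_compute.append(lc)
    latent.append(a * lc + b + rng.normal(0, 0.3, lc.size))
    fam_ids += [fam] * lc.size
log_compute, latent = np.concatenate(log_compute), np.concatenate(latent)
fam_ids = np.array(fam_ids)

# Benchmark scores: noisy sigmoids of the shared latent capability.
W, C = rng.uniform(0.5, 1.5, size=6), rng.uniform(-6, 2, size=6)
scores = sigmoid(np.outer(latent, W) + C) + rng.normal(0, 0.02, (latent.size, 6))

# (1) Capability measure = first principal component of standardized scores.
Z = (scores - scores.mean(0)) / scores.std(0)
cap = PCA(n_components=1).fit_transform(Z).ravel()
if np.corrcoef(cap, log_compute)[0, 1] < 0:       # fix PCA sign ambiguity
    cap = -cap

# (2) Per-family linear fit: log compute -> capability.
fam_fits = {f: np.polyfit(log_compute[fam_ids == f], cap[fam_ids == f], 1)
            for f in families}

# (3) Sigmoidal link from capability to an "emergent" downstream metric,
# fit only on small models, then extrapolated.
downstream = sigmoid(2.0 * latent - 8.0) + rng.normal(0, 0.02, latent.size)
small = log_compute < 23
link = lambda s, k, s0: sigmoid(k * (s - s0))
(k, s0), _ = curve_fit(link, cap[small], downstream[small], p0=[1.0, 0.0])

# Predict a hypothetical larger family-A model at log-compute 25.5.
cap_pred = np.polyval(fam_fits["A"], 25.5)
print(f"predicted downstream accuracy: {link(cap_pred, k, s0):.3f}")
```

The per-family linear map is what makes heterogeneous model families comparable on one law: families differ only in how efficiently compute buys capability, while the capability-to-performance link is shared, which is what lets small-model fits extrapolate to larger models.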