

Observational Scaling Laws and the Predictability of Language Model Performance

May 17, 2024
Authors: Yangjun Ruan, Chris J. Maddison, Tatsunori Hashimoto
cs.AI

Abstract

Understanding how language model performance varies with scale is critical to benchmark and algorithm development. Scaling laws are one approach to building this understanding, but the requirement of training models across many different scales has limited their use. We propose an alternative, observational approach that bypasses model training and instead builds scaling laws from ~80 publicly available models. Building a single scaling law from multiple model families is challenging due to large variations in their training compute efficiencies and capabilities. However, we show that these variations are consistent with a simple, generalized scaling law where language model performance is a function of a low-dimensional capability space, and model families only vary in their efficiency in converting training compute to capabilities. Using this approach, we show the surprising predictability of complex scaling phenomena: we show that several emergent phenomena follow a smooth, sigmoidal behavior and are predictable from small models; we show that the agent performance of models such as GPT-4 can be precisely predicted from simpler non-agentic benchmarks; and we show how to predict the impact of post-training interventions like Chain-of-Thought and Self-Consistency as language model capabilities continue to improve.
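
To make the pipeline in the abstract concrete, here is a minimal sketch of the three steps it describes: recover a low-dimensional capability space from benchmark scores of many public models, relate capabilities to training compute per family, and fit a smooth sigmoid from capabilities to a downstream task. The data below are synthetic and every name (scores, log_compute, the 3-component PCA) is an illustrative assumption, not the authors' released code.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Assumed input: benchmark scores for ~80 public models on a handful of
# standard benchmarks (rows = models, columns = benchmarks). Synthetic here.
n_models, n_benchmarks = 80, 6
scores = rng.uniform(0.0, 1.0, size=(n_models, n_benchmarks))

# Step 1: the paper posits that performance lies in a low-dimensional
# capability space; recover capability measures with PCA over the scores.
pca = PCA(n_components=3)
capabilities = pca.fit_transform(scores)  # (n_models, 3) capability vectors

# Step 2: within each model family, capabilities are taken to scale with
# log training compute; families differ only in compute efficiency, so one
# would fit capabilities[:, k] ~ a_family * log_compute + b_family per family.
log_compute = rng.uniform(20, 26, size=n_models)  # log10 FLOPs (synthetic)

# Step 3: downstream ("emergent") task performance is modeled as a smooth
# sigmoid of the principal capability measure, so fits on small models can
# extrapolate to larger ones.
def sigmoid(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

# Synthetic downstream accuracies generated from the same sigmoidal form.
target = sigmoid(capabilities[:, 0], 1.5, -0.2) + rng.normal(0, 0.02, n_models)
params, _ = curve_fit(sigmoid, capabilities[:, 0], target, p0=[1.0, 0.0])
print("fitted sigmoid parameters (a, b):", params)
```

In practice one would replace the synthetic scores with real benchmark results and hold out the largest models to test whether the sigmoid fit on small models predicts them, mirroring the predictability claims in the abstract.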

