Inverse Scaling: When Bigger Isn't Better
June 15, 2023
Authors: Ian R. McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, Andrew Gritsevskiy, Daniel Wurgaft, Derik Kauffman, Gabriel Recchia, Jiacheng Liu, Joe Cavanagh, Max Weiss, Sicong Huang, The Floating Droid, Tom Tseng, Tomasz Korbak, Xudong Shen, Yuhui Zhang, Zhengping Zhou, Najoung Kim, Samuel R. Bowman, Ethan Perez
cs.AI
Abstract
Work on scaling laws has found that large language models (LMs) show
predictable improvements to overall loss with increased scale (model size,
training data, and compute). Here, we present evidence for the claim that LMs
may show inverse scaling, or worse task performance with increased scale, e.g.,
due to flaws in the training objective and data. We present empirical evidence
of inverse scaling on 11 datasets collected by running a public contest, the
Inverse Scaling Prize, with a substantial prize pool. Through analysis of the
datasets, along with other examples found in the literature, we identify four
potential causes of inverse scaling: (i) preference to repeat memorized
sequences over following in-context instructions, (ii) imitation of undesirable
patterns in the training data, (iii) tasks containing an easy distractor task
which LMs could focus on, rather than the harder real task, and (iv) correct
but misleading few-shot demonstrations of the task. We release the winning
datasets at https://inversescaling.com/data to allow for further investigation
of inverse scaling. Our tasks have helped drive the discovery of U-shaped and
inverted-U scaling trends, where an initial trend reverses, suggesting that
scaling trends are less reliable at predicting the behavior of larger-scale
models than previously understood. Overall, our results suggest that there are
tasks for which increased model scale alone may not lead to progress, and that
more careful thought needs to go into the data and objectives for training
language models.