

Predicting Task Performance with Context-aware Scaling Laws

October 16, 2025
Authors: Kyle Montgomery, David Park, Jianhong Tu, Michael Bendersky, Beliz Gunel, Dawn Song, Chenguang Wang
cs.AI

Abstract

Scaling laws have transformed our understanding of large language models by linking upstream metrics like cross-entropy loss to design factors such as model size, training data, and compute. However, these conventional laws fail to capture downstream task performance, where context plays a critical role. In this work, we propose a straightforward, interpretable framework that jointly models downstream performance as a function of the training compute and the provided context. We empirically validate our framework by fitting it on the observed downstream performance of extended-context variants of Llama-2-7B and Llama-2-13B across 65,500 unique instances spanning three tasks: arithmetic reasoning, common sense reasoning, and machine translation. Our results demonstrate that our framework accurately models in-distribution downstream performance, generalizes across three orders of magnitude in training compute, and reliably extrapolates performance as the amount of context increases. These findings offer valuable insights into the interplay between training compute and context utilization, providing guidance for designing more efficient long-context LLMs for diverse downstream tasks. Our code is available at https://github.com/wang-research-lab/context-scaling.
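The abstract does not specify the framework's functional form. As a minimal illustrative sketch (not the paper's method), assume a simple saturating law in training compute C and amount of provided context n; the parameter names, the functional form, and the toy data below are all assumptions introduced only to show how such a joint fit could be set up with scipy.optimize.curve_fit.

```python
# Purely illustrative sketch: the paper's exact functional form is not given in
# the abstract. We assume a saturating law in training compute C and context n,
#   acc(C, n) = p_max - a * C**(-alpha) - b * n**(-beta),
# and fit it on synthetic observations.
import numpy as np
from scipy.optimize import curve_fit

def context_aware_law(X, p_max, a, alpha, b, beta):
    """Hypothetical downstream-performance law in compute C and context n."""
    C, n = X
    return p_max - a * C ** (-alpha) - b * n ** (-beta)

# Toy observations (synthetic, not from the paper):
# compute in units of 1e21 FLOPs, number of in-context examples, task accuracy.
C_obs = np.array([1.0, 1.0, 10.0, 10.0, 100.0, 100.0])
n_obs = np.array([1.0, 8.0, 1.0, 8.0, 1.0, 8.0])
acc_obs = np.array([0.42, 0.55, 0.51, 0.63, 0.58, 0.70])

params, _ = curve_fit(
    context_aware_law,
    (C_obs, n_obs),
    acc_obs,
    p0=[0.9, 0.3, 0.3, 0.3, 0.5],  # rough initial guesses
    maxfev=20000,
)
print(dict(zip(["p_max", "a", "alpha", "b", "beta"], params)))
```

In the paper's setting, the analogue would be fitting the chosen law to the observed downstream performance across the 65,500 instances and then evaluating how well it extrapolates as the amount of context grows; the snippet above only demonstrates the mechanics of a joint fit under the stated assumptions.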