Predicting Task Performance with Context-aware Scaling Laws
October 16, 2025
Authors: Kyle Montgomery, David Park, Jianhong Tu, Michael Bendersky, Beliz Gunel, Dawn Song, Chenguang Wang
cs.AI
Abstract
Scaling laws have transformed our understanding of large language models by
linking upstream metrics like cross-entropy loss to design factors such as
model size, training data, and compute. However, these conventional laws fail
to capture downstream task performance, where context plays a critical role. In
this work, we propose a straightforward, interpretable framework that jointly
models downstream performance as a function of the training compute and the
provided context. We empirically validate our framework by fitting it on the
observed downstream performance of extended-context variants of Llama-2-7B and
Llama-2-13B across 65,500 unique instances spanning three tasks: arithmetic
reasoning, commonsense reasoning, and machine translation. Our results
demonstrate that our framework accurately models in-distribution downstream
performance, generalizes across three orders of magnitude in training compute,
and reliably extrapolates performance as the amount of context increases. These
findings offer valuable insights into the interplay between training compute
and context utilization, providing guidance for designing more efficient
long-context LLMs for diverse downstream tasks. Our code is available at
https://github.com/wang-research-lab/context-scaling.
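
The abstract does not specify the framework's functional form. As a rough, hypothetical illustration of what "jointly modeling downstream performance as a function of training compute and provided context" could look like, the sketch below assumes a saturating power-law surface and fits it with scipy.optimize.curve_fit on synthetic data; the functional form, parameter names, and data are assumptions for illustration, not the paper's actual model.

# Hypothetical sketch: jointly fit downstream performance P to training
# compute C and context amount n. The saturating functional form below is
# an illustrative assumption, not the framework from the paper.
import numpy as np
from scipy.optimize import curve_fit

def performance(X, a, alpha, b, beta, p_max):
    # Assumed law: P approaches p_max as compute and context grow.
    C, n = X  # C: training FLOPs; n: context size (e.g., tokens)
    return p_max * (1.0 - a * C ** (-alpha) - b * n ** (-beta))

# Toy synthetic observations standing in for real benchmark accuracies.
rng = np.random.default_rng(0)
C = 10.0 ** rng.uniform(20, 23, size=200)  # compute spanning ~3 orders of magnitude
n = 2.0 ** rng.uniform(5, 12, size=200)    # context from 32 to 4096 tokens
y = performance((C, n), 5.0, 0.1, 2.0, 0.3, 0.9) + rng.normal(0.0, 0.01, 200)

params, _ = curve_fit(performance, (C, n), y,
                      p0=[1.0, 0.1, 1.0, 0.1, 1.0], maxfev=20000)
print(dict(zip(["a", "alpha", "b", "beta", "p_max"], params.round(3))))

# Extrapolate to a longer context at fixed compute, mirroring the
# context-extrapolation evaluation described in the abstract.
print(performance((np.array([1e22]), np.array([16384.0])), *params))

A saturating form keeps predictions bounded by p_max, which suits accuracy-style metrics; the paper's actual parameterization may differ.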