Prescriptive Scaling Reveals the Evolution of Language Model Capabilities
February 17, 2026
Authors: Hanlin Zhang, Jikai Jin, Vasilis Syrgkanis, Sham Kakade
cs.AI
Abstract
For deploying foundation models, practitioners increasingly need prescriptive scaling laws: given a pre-training compute budget, what downstream accuracy is attainable with contemporary post-training practice, and how stable is that mapping as the field evolves? Using large-scale observational evaluations of model performance (5k observational and 2k newly sampled data points), we estimate capability boundaries, i.e., high conditional quantiles of benchmark scores as a function of log pre-training FLOPs, via smoothed quantile regression with a monotone, saturating sigmoid parameterization. We validate temporal reliability by fitting on earlier model generations and evaluating on later releases. Across tasks, the estimated boundaries are mostly stable, with the exception of math reasoning, whose boundary advances consistently over time. We then extend our approach to analyze task-dependent saturation and to probe contamination-related shifts on math reasoning tasks. Finally, we introduce an efficient algorithm that recovers near-full-data frontiers using roughly 20% of the evaluation budget. Together, our work releases Proteus 2k, an up-to-date model performance evaluation dataset, and introduces a practical methodology for translating compute budgets into reliable performance expectations and for monitoring when capability boundaries shift over time.
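
To make the estimator concrete, below is a minimal sketch, assuming benchmark scores are accuracies in [0, 1]; the function names (sigmoid_boundary, fit_boundary), the softplus smoothing of the pinball loss, and the optimizer choice are illustrative assumptions, not the paper's released implementation.

# A minimal sketch (not the paper's released code) of the estimator the
# abstract describes: fit a high conditional quantile ("capability
# boundary") of benchmark score against log pre-training FLOPs, using a
# smoothed pinball loss and a monotone, saturating sigmoid. The softplus
# smoothing, parameter names, and optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def sigmoid_boundary(x, lo, hi, slope, mid):
    # Monotone, saturating sigmoid in log-FLOPs x; slope > 0 enforces monotonicity.
    return lo + (hi - lo) / (1.0 + np.exp(-slope * (x - mid)))

def smoothed_pinball(r, tau, eps=1e-3):
    # Softplus-smoothed quantile loss: tau*r + eps*log(1 + exp(-r/eps)),
    # which approaches the standard pinball loss max(tau*r, (tau-1)*r) as eps -> 0.
    return tau * r + eps * np.logaddexp(0.0, -r / eps)

def fit_boundary(log_flops, scores, tau=0.95):
    # Assumes scores are accuracies in [0, 1]; returns (lo, hi, slope, mid).
    def objective(theta):
        r = scores - sigmoid_boundary(log_flops, *theta)
        return smoothed_pinball(r, tau).mean()
    theta0 = [float(scores.min()), float(scores.max()), 1.0, float(np.median(log_flops))]
    bounds = [(0.0, 1.0), (0.0, 1.0), (1e-6, None), (None, None)]  # slope > 0
    return minimize(objective, theta0, bounds=bounds, method="L-BFGS-B").x

Smoothing makes the quantile objective differentiable, so a standard quasi-Newton solver suffices, and the positive-slope bound encodes the monotone, saturating shape the abstract describes; refitting on models released before versus after a cutoff date would mirror the temporal validation the abstract reports.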