Prescriptive Scaling Reveals the Evolution of Language Model Capabilities
February 17, 2026
Authors: Hanlin Zhang, Jikai Jin, Vasilis Syrgkanis, Sham Kakade
cs.AI
Abstract
For deploying foundation models, practitioners increasingly need prescriptive scaling laws: given a pre-training compute budget, what downstream accuracy is attainable with contemporary post-training practice, and how stable is that mapping as the field evolves? Using large-scale observational evaluations comprising 5k existing and 2k newly sampled records of model performance, we estimate capability boundaries (high conditional quantiles of benchmark scores as a function of log pre-training FLOPs) via smoothed quantile regression with a monotone, saturating sigmoid parameterization. We validate temporal reliability by fitting on earlier model generations and evaluating on later releases. Across tasks, the estimated boundaries are mostly stable, with the exception of math reasoning, which exhibits a consistently advancing boundary over time. We then extend our approach to analyze task-dependent saturation and to probe contamination-related shifts on math reasoning tasks. Finally, we introduce an efficient algorithm that recovers near-full-data frontiers using roughly 20% of the evaluation budget. Together, our work releases Proteus 2k, a dataset of up-to-date model performance evaluations, and introduces a practical methodology for translating compute budgets into reliable performance expectations and for monitoring when capability boundaries shift over time.
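
To make the boundary-estimation step concrete, the minimal Python sketch below (assuming numpy and scipy; not the authors' released code) fits a monotone, saturating sigmoid to a high conditional quantile of benchmark scores as a function of log pre-training FLOPs by minimizing a smoothed pinball (quantile) loss. The quantile level, smoothing bandwidth, parameter names, and synthetic data are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.optimize import minimize

def sigmoid_boundary(log_flops, params):
    # Monotone, saturating curve: score rises with log FLOPs toward an upper asymptote.
    upper, slope, midpoint = params
    return upper / (1.0 + np.exp(-slope * (log_flops - midpoint)))

def smoothed_pinball(residual, tau, h=0.01):
    # Smooth surrogate of the check loss rho_tau(u) = u * (tau - 1{u < 0});
    # h controls the amount of smoothing around the kink at zero.
    return tau * residual + h * np.logaddexp(0.0, -residual / h)

def fit_boundary(log_flops, scores, tau=0.9):
    # Fit the sigmoid parameters by minimizing the average smoothed quantile loss,
    # so the fitted curve tracks the tau-th conditional quantile (the capability boundary).
    def objective(params):
        residual = scores - sigmoid_boundary(log_flops, params)
        return np.mean(smoothed_pinball(residual, tau))
    init = np.array([scores.max(), 1.0, np.median(log_flops)])  # rough starting point
    return minimize(objective, init, method="Nelder-Mead").x

# Illustrative usage on synthetic data: accuracy vs. log10 pre-training FLOPs.
rng = np.random.default_rng(0)
log_flops = rng.uniform(20.0, 26.0, size=500)
scores = 0.8 / (1.0 + np.exp(-1.2 * (log_flops - 23.0))) * rng.uniform(0.5, 1.0, 500)
upper, slope, midpoint = fit_boundary(log_flops, scores, tau=0.9)
print(f"upper={upper:.2f}, slope={slope:.2f}, midpoint={midpoint:.1f}")

The smoothed loss replaces the non-differentiable kink of the standard quantile (pinball) loss with a logistic approximation, which keeps the objective amenable to generic optimizers while preserving the high-quantile target.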