
Benchmark^2: Systematic Evaluation of LLM Benchmarks

January 7, 2026
Authors: Qi Qian, Chengsong Huang, Jingwen Xu, Changze Lv, Muling Wu, Wenhao Liu, Xiaohua Wang, Zhenghua Wang, Zisu Huang, Muzhao Tian, Jianhan Xu, Kun Hu, He-Da Wang, Yao Hu, Xuanjing Huang, Xiaoqing Zheng
cs.AI

Abstract

The rapid proliferation of benchmarks for evaluating large language models (LLMs) has created an urgent need for systematic methods to assess benchmark quality itself. We propose Benchmark^2, a comprehensive framework comprising three complementary metrics: (1) Cross-Benchmark Ranking Consistency, measuring whether a benchmark produces model rankings aligned with peer benchmarks; (2) Discriminability Score, quantifying a benchmark's ability to differentiate between models; and (3) Capability Alignment Deviation, identifying problematic instances where stronger models fail but weaker models succeed within the same model family. We conduct extensive experiments across 15 benchmarks spanning mathematics, reasoning, and knowledge domains, evaluating 11 LLMs across four model families. Our analysis reveals significant quality variations among existing benchmarks and demonstrates that selective benchmark construction based on our metrics can achieve comparable evaluation performance with substantially reduced test sets.
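The abstract names the three metrics but does not spell out their formulas. The sketch below is a minimal, illustrative formulation under stated assumptions: Spearman rank correlation against peer-benchmark rankings for consistency, mean pairwise score gap for discriminability, and the fraction of items where a stronger family member fails while a weaker one succeeds for alignment deviation. All function names and formulas are assumptions for illustration, not the paper's actual definitions.

```python
# Illustrative stand-ins for the three Benchmark^2 metrics (not the paper's exact formulas).
from itertools import combinations
from statistics import mean

def rankings(scores):
    """Rank models by score (0 = best). `scores` maps model name -> accuracy."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {model: rank for rank, model in enumerate(ordered)}

def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings over the same models (no ties)."""
    models = list(rank_a)
    n = len(models)
    d2 = sum((rank_a[m] - rank_b[m]) ** 2 for m in models)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def ranking_consistency(target, peers):
    """Cross-Benchmark Ranking Consistency (assumed form): mean rank correlation
    between a benchmark's model ranking and the rankings from its peer benchmarks."""
    r_t = rankings(target)
    return mean(spearman(r_t, rankings(p)) for p in peers)

def discriminability(scores):
    """Discriminability Score (assumed form): mean pairwise score gap between models;
    a larger spread means the benchmark separates models more clearly."""
    vals = list(scores.values())
    return mean(abs(a - b) for a, b in combinations(vals, 2))

def capability_alignment_deviation(per_item_correct, family_order):
    """Capability Alignment Deviation (assumed form): fraction of test items where a
    stronger model in a family fails while a weaker model in the same family succeeds.
    `family_order` lists models from strongest to weakest."""
    n_items = len(next(iter(per_item_correct.values())))
    flagged = 0
    for i in range(n_items):
        if any(not per_item_correct[strong][i] and per_item_correct[weak][i]
               for strong, weak in combinations(family_order, 2)):
            flagged += 1
    return flagged / n_items

# Toy usage with hypothetical accuracy numbers, for illustration only.
bench = {"model_large": 0.82, "model_medium": 0.74, "model_small": 0.61}
peers = [{"model_large": 0.78, "model_medium": 0.70, "model_small": 0.65},
         {"model_large": 0.90, "model_medium": 0.88, "model_small": 0.52}]
print(ranking_consistency(bench, peers))  # 1.0: same model ordering as both peers
print(discriminability(bench))            # mean pairwise accuracy gap
```

In this toy example a benchmark that preserves the ordering of its peers scores a consistency of 1.0, and a wider accuracy spread yields a higher discriminability value; the actual paper may use different correlation measures or normalizations.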