

HLE-Verified: A Systematic Verification and Structured Revision of Humanity's Last Exam

February 15, 2026
Authors: Weiqi Zhai, Zhihai Wang, Jinghang Wang, Boyu Yang, Xiaogang Li, Xiang Xu, Bohan Wang, Peng Wang, Xingzhe Wu, Anfeng Li, Qiyuan Feng, Yuhao Zhou, Shoulin Han, Wenjie Luo, Yiyuan Li, Yaxuan Wang, Ruixian Luo, Guojie Lin, Peiyao Xiao, Chengliang Xu, Ben Wang, Zeyu Wang, Zichao Chen, Jianan Ye, Yijie Hu, Jialong Chen, Zongwen Shen, Yuliang Xu, An Yang, Bowen Yu, Dayiheng Liu, Junyang Lin, Hu Wei, Que Shen, Bing Zhao
cs.AI

Abstract

Humanity's Last Exam (HLE) has become a widely used benchmark for evaluating frontier large language models on challenging, multi-domain questions. However, community-led analyses have raised concerns that HLE contains a non-trivial number of noisy items, which can bias evaluation results and distort cross-model comparisons. To address this challenge, we introduce HLE-Verified, a verified and revised version of HLE with a transparent verification protocol and fine-grained error taxonomy. Our construction follows a two-stage validation-and-repair workflow resulting in a certified benchmark. In Stage I, each item undergoes binary validation of the problem and final answer through domain-expert review and model-based cross-checks, yielding 641 verified items. In Stage II, flawed but fixable items are revised under strict constraints preserving the original evaluation intent, through dual independent expert repairs, model-assisted auditing, and final adjudication, resulting in 1,170 revised-and-certified items. The remaining 689 items are released as a documented uncertain set with explicit uncertainty sources and expertise tags for future refinement. We evaluate seven state-of-the-art language models on HLE and HLE-Verified, observing an average absolute accuracy gain of 7–10 percentage points on HLE-Verified. The improvement is particularly pronounced on items where the original problem statement and/or reference answer is erroneous, with gains of 30–40 percentage points. Our analyses further reveal a strong association between model confidence and the presence of errors in the problem statement or reference answer, supporting the effectiveness of our revisions. Overall, HLE-Verified improves HLE-style evaluations by reducing annotation noise and enabling more faithful measurement of model capabilities. Data is available at: https://github.com/SKYLENAGE-AI/HLE-Verified
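To make the partition concrete, here is a minimal sketch in Python of the two-stage triage the abstract describes. All names (`Item`, `triage`, `expert_ok`, `cross_check_ok`, `repair`) are hypothetical illustrations, not the paper's actual tooling; in particular, the single `repair` callback stands in for the fuller Stage II pipeline of dual independent expert repairs, model-assisted auditing, and final adjudication.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional, Tuple

class Status(Enum):
    VERIFIED = "verified"            # Stage I pass: problem and answer hold up as-is (641 items)
    CERTIFIED = "revised_certified"  # Stage II pass: flawed but fixable, repaired and re-audited (1,170 items)
    UNCERTAIN = "uncertain"          # documented uncertain set, tagged with uncertainty sources (689 items)

@dataclass
class Item:
    question: str
    reference_answer: str

def triage(
    item: Item,
    expert_ok: Callable[[Item], bool],         # domain-expert review (binary verdict)
    cross_check_ok: Callable[[Item], bool],    # model-based cross-check (binary verdict)
    repair: Callable[[Item], Optional[Item]],  # constrained repair; None if the original intent cannot be preserved
) -> Tuple[Status, Item]:
    # Stage I: binary validation of the problem statement and reference answer.
    if expert_ok(item) and cross_check_ok(item):
        return Status.VERIFIED, item
    # Stage II: attempt a repair that preserves the original evaluation intent.
    fixed = repair(item)
    if fixed is not None:
        return Status.CERTIFIED, fixed
    # Neither verifiable nor confidently repairable: release into the uncertain set.
    return Status.UNCERTAIN, item
```

Note that the three buckets are disjoint and exhaustive: 641 + 1,170 + 689 = 2,500, which matches the size of HLE's public question set.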