Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math
October 15, 2025
Authors: Shrey Pandit, Austin Xu, Xuan-Phi Nguyen, Yifei Ming, Caiming Xiong, Shafiq Joty
cs.AI
Abstract
Large language model (LLM)-based reasoning systems have recently achieved
gold medal-level performance in the IMO 2025 competition, writing mathematical
proofs where, to receive full credit, each step must be not only correct but
also sufficiently supported. To train LLM-based reasoners in such challenging,
open-ended settings, strong verifiers capable of catching step-level mistakes
are necessary prerequisites. We introduce Hard2Verify, a human-annotated,
step-level verification benchmark produced with over 500 hours of human labor.
Hard2Verify is designed to rigorously assess step-level verifiers at the
frontier: Verifiers must provide step-level annotations or identify the first
error in responses generated by frontier LLMs for very recent, challenging, and
open-ended math questions. We evaluate 29 generative critics and process reward
models, demonstrating that, beyond a few standouts, open-source verifiers lag
behind closed-source models. We subsequently analyze what drives poor performance in
step-level verification, the impacts of scaling verifier compute, as well as
fundamental questions such as self-verification and verification-generation
dynamics.
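The task format the abstract describes is concrete: a verifier receives a frontier-model response broken into steps and must either label every step or identify the first erroneous one, to be scored against human annotations. As an illustration only (the paper's actual data format, metric names, and scoring rules are not given here), the sketch below shows one plausible way to represent step-level verdicts and score first-error identification; all names and structures are assumptions.

```python
# Minimal sketch of the step-level verification task described in the
# abstract. The StepVerdict structure and the scoring rule below are
# illustrative assumptions, not the paper's actual evaluation code.

from dataclasses import dataclass


@dataclass
class StepVerdict:
    step_index: int
    correct: bool  # the verifier's judgment of this proof step


def first_error(verdicts: list[StepVerdict]) -> int | None:
    """Return the index of the first step judged incorrect, or None
    if the verifier considers every step correct."""
    for v in sorted(verdicts, key=lambda v: v.step_index):
        if not v.correct:
            return v.step_index
    return None


def first_error_accuracy(predicted: list[list[StepVerdict]],
                         gold_first_errors: list[int | None]) -> float:
    """Fraction of responses where the verifier's first flagged error
    matches the human-annotated first error (None = fully correct)."""
    hits = sum(first_error(p) == g
               for p, g in zip(predicted, gold_first_errors))
    return hits / len(gold_first_errors)
```

Here `None` stands for a fully correct response, matching the abstract's framing that a verifier must either annotate each step or pinpoint where a solution first goes wrong.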