Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math
October 15, 2025
Authors: Shrey Pandit, Austin Xu, Xuan-Phi Nguyen, Yifei Ming, Caiming Xiong, Shafiq Joty
cs.AI
Abstract
Large language model (LLM)-based reasoning systems have recently achieved gold medal-level performance in the IMO 2025 competition, writing mathematical proofs where, to receive full credit, each step must be not only correct but also sufficiently supported. To train LLM-based reasoners in such challenging, open-ended settings, strong verifiers capable of catching step-level mistakes are necessary prerequisites. We introduce Hard2Verify, a human-annotated, step-level verification benchmark produced with over 500 hours of human labor. Hard2Verify is designed to rigorously assess step-level verifiers at the frontier: verifiers must provide step-level annotations or identify the first error in responses generated by frontier LLMs for very recent, challenging, and open-ended math questions. We evaluate 29 generative critics and process reward models, demonstrating that, beyond a few standouts, open-source verifiers lag closed-source models. We subsequently analyze what drives poor performance in step-level verification, the impacts of scaling verifier compute, as well as fundamental questions such as self-verification and verification-generation dynamics.
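The abstract describes two verification tasks: labeling each step of a response and identifying the first erroneous step. The snippet below is a minimal sketch (not the authors' code or metrics) of how such judgments might be scored against human annotations, assuming binary step labels (1 = correct, 0 = erroneous); the function names and label convention are hypothetical illustrations.

```python
# Minimal sketch of scoring step-level verification, under assumed conventions:
# each response is a list of step labels, 1 = correct step, 0 = erroneous step.
from typing import List, Optional


def first_error_index(step_labels: List[int]) -> Optional[int]:
    """Index of the first erroneous step, or None if every step is correct."""
    for i, label in enumerate(step_labels):
        if label == 0:
            return i
    return None


def step_level_accuracy(gold: List[int], predicted: List[int]) -> float:
    """Fraction of steps whose predicted label matches the human annotation."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)


def first_error_match(gold: List[int], predicted: List[int]) -> bool:
    """True if the verifier flags the same first erroneous step as the annotators
    (or correctly reports that no step is erroneous)."""
    return first_error_index(gold) == first_error_index(predicted)


if __name__ == "__main__":
    gold = [1, 1, 0, 1]       # annotators marked step 3 as the first error
    predicted = [1, 0, 0, 1]  # a verifier's step-level judgments
    print(step_level_accuracy(gold, predicted))  # 0.75
    print(first_error_match(gold, predicted))    # False: verifier flags step 2 first
```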