

AMO-Bench: Large Language Models Still Struggle in High School Math Competitions

October 30, 2025
Authors: Shengnan An, Xunliang Cai, Xuezhi Cao, Xiaoyu Li, Yehao Lin, Junlin Liu, Xinxuan Lv, Dan Ma, Xuanlin Wang, Ziwen Wang, Shuang Zhou
cs.AI

Abstract

We present AMO-Bench, an Advanced Mathematical reasoning benchmark with Olympiad-level or even higher difficulty, comprising 50 human-crafted problems. Existing benchmarks have widely leveraged high school math competitions to evaluate the mathematical reasoning capabilities of large language models (LLMs). However, many of these competitions are becoming less effective for assessing top-tier LLMs due to performance saturation (e.g., AIME24/25). To address this, AMO-Bench introduces more rigorous challenges by ensuring that all 50 problems are (1) cross-validated by experts to meet at least the International Mathematical Olympiad (IMO) difficulty standard, and (2) entirely original, preventing performance leakage from data memorization. Moreover, each problem requires only a final answer rather than a proof, enabling automatic and robust grading. Experimental results across 26 LLMs show that even the best-performing model achieves only 52.4% accuracy on AMO-Bench, with most models scoring below 40%. Beyond these low scores, our analysis reveals a promising scaling trend with increasing test-time compute. These results highlight the significant room for improving mathematical reasoning in current LLMs. We release AMO-Bench to facilitate further research into advancing the reasoning abilities of language models. https://amo-bench.github.io/
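
Because AMO-Bench problems require only a final answer rather than a proof, grading can be automated. The snippet below is a minimal illustrative sketch of how final-answer grading and a simple test-time-compute evaluation (majority voting over sampled answers) might look; the function names, normalization rules, and data format are assumptions for illustration, not the authors' released evaluation code.

```python
from collections import Counter
from fractions import Fraction

def normalize_answer(ans: str) -> str:
    """Canonicalize a final-answer string (illustrative rules only)."""
    ans = ans.strip().replace(" ", "").rstrip(".")
    # Try to interpret the answer as an exact rational number.
    try:
        return str(Fraction(ans))
    except (ValueError, ZeroDivisionError):
        return ans  # fall back to string comparison (e.g., a LaTeX expression)

def is_correct(predicted: str, reference: str) -> bool:
    """Automatic grading: exact match after normalization."""
    return normalize_answer(predicted) == normalize_answer(reference)

def majority_vote_accuracy(samples_per_problem, references) -> float:
    """Accuracy when each problem gets k sampled answers and the most
    frequent normalized answer is taken as the prediction -- one simple
    way to spend more test-time compute per problem."""
    correct = 0
    for samples, ref in zip(samples_per_problem, references):
        votes = Counter(normalize_answer(s) for s in samples)
        prediction, _ = votes.most_common(1)[0]
        correct += prediction == normalize_answer(ref)
    return correct / len(references)

# Hypothetical usage on two problems with three sampled answers each:
preds = [["1/2", "0.5 ", "1/3"], ["42", "41", "42"]]
refs = ["1/2", "42"]
print(majority_vote_accuracy(preds, refs))  # -> 1.0
```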