HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification
March 16, 2026
Authors: Erik Y. Wang, Sumeet Motwani, James V. Roggeveen, Eliot Hodges, Dulhan Jayalath, Charles London, Kalyan Ramakrishnan, Flaviu Cipcigan, Philip Torr, Alessandro Abate
cs.AI
Abstract
Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can perform novel research is still widely debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning 8 domains in computational and applied mathematics, paired with an open-source evaluation framework for automated verification. Our benchmark targets a class of problems where discovery is hard, requiring meaningful mathematical insight, but verification is computationally efficient and simple. Because the solutions to these problems are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. Existing research-level benchmarks instead rely on formal proof verification or manual review, both of which are expensive to scale. Using this platform, we find two problems for which GPT 5.4 Pro proposes solutions that improve on the best-known published results, representing potential novel contributions (pending expert review). We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved problem classes could constitute novel results in the mathematical literature.
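
To make the "discovery is hard, verification is efficient" design concrete, the following is a minimal illustrative sketch of what an automated verifier for one such problem could look like. This is an assumption-laden example, not HorizonMath's actual framework or API: the chosen problem (lowering the Riesz energy of a point configuration on the sphere), the helper names riesz_energy and verify_candidate, and the threshold best_known are all hypothetical.

# Illustrative sketch (not HorizonMath's actual API) of the pattern where a
# model proposes a concrete construction and a verifier only has to evaluate
# an objective and compare it against the best published value.
import numpy as np

def riesz_energy(points: np.ndarray, s: float = 1.0) -> float:
    """Sum of 1 / |x_i - x_j|^s over all pairs i < j."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)  # upper-triangular pair indices
    return float(np.sum(1.0 / dists[iu] ** s))

def verify_candidate(points: np.ndarray, best_known: float, tol: float = 1e-9) -> bool:
    """Accept a candidate iff it is feasible and beats the published record."""
    # Feasibility check: every point must lie on the unit sphere.
    if not np.allclose(np.linalg.norm(points, axis=1), 1.0, atol=tol):
        return False
    return riesz_energy(points) < best_known - tol

# Toy candidate: the regular tetrahedron inscribed in the unit sphere.
# The threshold 3.7 is a placeholder, not a real published record.
candidate = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
print(verify_candidate(candidate, best_known=3.7))  # True

The point of this pattern is the asymmetry: checking a candidate takes milliseconds of objective evaluation, even when finding one requires genuine mathematical insight, which is what makes verification scalable without formal proofs or human review.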