MATH-Beyond: A Benchmark for RL to Expand Beyond the Base Model

October 13, 2025
Authors: Prasanna Mayilvahanan, Ricardo Dominguez-Olmedo, Thaddäus Wiedemer, Wieland Brendel
cs.AI

Abstract

With the advent of DeepSeek-R1, a new wave of reinforcement learning (RL) methods has emerged that seem to unlock stronger mathematical reasoning. However, a closer look at the open-source ecosystem reveals a critical limitation: with sufficiently many draws (e.g., pass@1024), many existing base models already solve nearly all questions on widely used math benchmarks such as MATH-500 and AIME 2024. This suggests that the RL fine-tuning methods prevalent in the LLM reasoning literature largely sharpen existing solution modes rather than discovering entirely new ones. Such sharpening stands in contrast to the broader promise of RL: to foster exploration and to acquire new skills. To move beyond this plateau, we introduce MATH-Beyond (MATH-B), a benchmark deliberately constructed to defeat common open-source models of up to 8B parameters even under large sampling budgets. Improving performance on our benchmark via RL requires methods that learn to reason in ways that go beyond base model capabilities in repeated sampling. Since the problems are drawn from subsets of the DAPO-Math-17K and DeepScaleR datasets, they remain topically equivalent to standard high-school math. Validating our premise, RL fine-tuned models such as Nemotron-Research-Reasoning-Qwen-1.5B and DeepScaleR-1.5B-Preview perform poorly on MATH-B at pass@1024, showing how existing approaches fall short when tackling harder instances. We hope MATH-B will catalyze exploration-driven RL approaches that elicit deeper reasoning capabilities. We release MATH-B at https://huggingface.co/datasets/brendel-group/MATH-Beyond.
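
Pass@k figures like the pass@1024 numbers above are usually computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021) rather than by literally re-drawing k samples per problem. The abstract does not specify the estimator used here, so the following is a minimal sketch under that assumption, with hypothetical example numbers:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples drawn per problem
    c: number of those samples that are correct
    k: sampling budget being estimated (e.g., 1024)
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    # 1 - C(n-c, k) / C(n, k), evaluated as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical numbers: just 3 correct samples out of 2048 already give
# pass@1024 of roughly 0.875, which is why large sampling budgets
# saturate benchmarks that base models can occasionally solve.
print(pass_at_k(2048, 3, 1024))  # ~0.875
```

This also illustrates the paper's premise from the evaluation side: a benchmark that stays hard at pass@1024 must contain problems the base model essentially never solves, not merely problems it solves rarely.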
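Since the benchmark is released on the Hugging Face Hub at the link above, a minimal loading sketch with the `datasets` library follows; the split name (`train`) and the record schema are assumptions, not documented guarantees, so inspect a record to confirm:

```python
from datasets import load_dataset

# Repository ID taken from the paper's release link; split name assumed.
ds = load_dataset("brendel-group/MATH-Beyond", split="train")

print(len(ds))  # number of benchmark problems
print(ds[0])    # first record; reveals the actual field names
```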