Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning

February 10, 2025
Authors: Chengqi Lyu, Songyang Gao, Yuzhe Gu, Wenwei Zhang, Jianfei Gao, Kuikun Liu, Ziyi Wang, Shuaibin Li, Qian Zhao, Haian Huang, Weihan Cao, Jiangning Liu, Hongwei Liu, Junnan Liu, Songyang Zhang, Dahua Lin, Kai Chen
cs.AI

Abstract

Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as the o-series models of OpenAI, have made remarkable progress on reasoning tasks. However, the complete technical details remain unrevealed, and the techniques believed to be adopted are limited to reinforcement learning (RL) and long chains of thought. This paper proposes a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through Outcome REwArd-based reinforcement Learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible. We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments. This formulation further implies that the rewards of negative samples should be reshaped to ensure gradient consistency between positive and negative samples. To alleviate the long-standing difficulties that sparse rewards pose in RL, which are further exacerbated by the partial correctness of long chains of thought in reasoning tasks, we further apply a token-level reward model to sample important tokens in reasoning trajectories for learning. With OREAL, a 7B model can for the first time obtain 94.0 pass@1 accuracy on MATH-500 through RL, on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation, reaching 95.0 pass@1 accuracy on MATH-500. Our investigation also indicates the importance of the initial policy model and training queries for RL. Code, models, and data will be released to benefit future research: https://github.com/InternLM/OREAL.
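
To make the theoretical claim above concrete, the sketch below restates the standard closed-form optimum of a KL-regularized RL objective in the binary-outcome-reward setting. It is an illustration based on well-known derivations, not an excerpt from the paper; the notation (reference policy \pi_0, regularization strength \beta, partition function Z) is conventional and may differ from the paper's.

```latex
% KL-regularized objective for a prompt x, reference policy \pi_0, and reward r:
%   \max_{\pi} \; \mathbb{E}_{y \sim \pi(\cdot \mid x)}\!\left[ r(x, y) \right]
%              \;-\; \beta \, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\|\, \pi_0(\cdot \mid x) \right)
% Its maximizer is the exponentially tilted reference policy:
\pi^{*}(y \mid x)
  = \frac{\pi_0(y \mid x)\, \exp\!\left( r(x, y) / \beta \right)}{Z(x)},
\qquad
Z(x) = \sum_{y} \pi_0(y \mid x)\, \exp\!\left( r(x, y) / \beta \right).
% With a binary outcome reward r(x, y) \in \{0, 1\}, \pi^{*} simply reweights \pi_0
% toward correct trajectories, so imitating correct samples drawn from the reference
% policy (e.g., the positive trajectories kept by best-of-N sampling) is enough to
% approximate \pi^{*} -- the intuition behind the behavior-cloning result stated above.
```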
