Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key
May 7, 2026
Authors: Tianle Wang, Zhaoyang Wang, Guangchen Lan, Xinpeng Wei, Sipeng Zhang, Guanwen Qiu, Abulhair Saparov
cs.AI
Abstract
Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical-reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. The framework supports a wide range of logics, from simple implication-only logic ("if-then") to more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that RL training compute T follows a power law in reasoning depth D (T ∝ D^γ, R² > 0.99), and that the scaling exponent γ increases monotonically with logical expressiveness, from 1.04 to 2.60. On downstream mathematics and general-reasoning benchmarks, more expressive training settings yield both larger performance gains (up to +10.66 points) and more compute-efficient transfer than less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and that curriculum-based training substantially improves scaling efficiency.
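As a minimal sketch of how a power-law exponent like γ can be estimated from (depth, compute) measurements, the fit T = c · D^γ becomes a linear regression in log-log space: log T = γ · log D + log c. The numbers below are made-up placeholder data for illustration only, not results or code from the paper.

```python
import numpy as np

# Hypothetical (depth, training-compute) pairs; NOT data from the paper.
# Generated to follow T = c * D^gamma exactly, with c = 3.0 and gamma = 1.5.
depths = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
compute = 3.0 * depths ** 1.5

# Fit T = c * D^gamma  <=>  log T = gamma * log D + log c,
# i.e., a degree-1 polynomial fit in log-log coordinates.
gamma, log_c = np.polyfit(np.log(depths), np.log(compute), 1)

print(f"estimated gamma = {gamma:.2f}, estimated c = {np.exp(log_c):.2f}")
```

On real, noisy measurements the same fit would also yield the goodness-of-fit statistic (the R² > 0.99 reported in the abstract); here the synthetic data lie exactly on the curve, so the fit recovers the generating exponent.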