ChatPaper.ai


Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key

May 7, 2026
作者: Tianle Wang, Zhaoyang Wang, Guangchen Lan, Xinpeng Wei, Sipeng Zhang, Guanwen Qiu, Abulhair Saparov
cs.AI

Abstract

Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical-reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. The framework supports a wide range of logics, from simple implication-only logic ("if-then") to more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that RL training compute T follows a power law in reasoning depth D (T ∝ D^γ, R² > 0.99), and that the scaling exponent γ increases monotonically with logical expressiveness, from 1.04 to 2.60. On downstream mathematics and general-reasoning benchmarks, more expressive training settings yield both larger performance gains (up to +10.66 points) and more compute-efficient transfer than less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and that curriculum-based training substantially improves scaling efficiency.
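The abstract's central quantitative claim is a power law T ∝ D^γ between training compute and reasoning depth. As an illustration only (this is not the paper's code, and the data below are synthetic), the standard way to estimate such an exponent is ordinary least squares in log-log space, where the slope recovers γ and the intercept recovers the prefactor:

```python
# Illustrative sketch: fitting a power law T = a * D^gamma by linear
# least squares on (log D, log T). The slope of the fit is the scaling
# exponent gamma; the intercept is log a.
import math

# Hypothetical synthetic measurements: reasoning depths D and the
# training compute T needed at each depth, generated from T = 2 * D^1.5.
depths = [2, 4, 8, 16, 32]
compute = [2 * d**1.5 for d in depths]

# Ordinary least squares on the log-transformed data.
xs = [math.log(d) for d in depths]
ys = [math.log(t) for t in compute]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
gamma = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
log_a = my - gamma * mx

print(round(gamma, 3))            # recovers the exponent 1.5
print(round(math.exp(log_a), 3))  # recovers the prefactor 2.0
```

On noiseless synthetic data the fit recovers the generating exponent exactly; on real compute measurements one would also report the R² of this log-log regression, which is the goodness-of-fit statistic the abstract cites (R² > 0.99).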