

Breaking the Exploration Bottleneck: Rubric-Scaffolded Reinforcement Learning for General LLM Reasoning

August 23, 2025
Authors: Yang Zhou, Sunzhu Li, Shunyu Liu, Wenkai Fang, Jiale Zhao, Jingwen Yang, Jianwei Lv, Kongcheng Zhang, Yihe Zhou, Hengtong Lu, Wei Chen, Yan Xie, Mingli Song
cs.AI

Abstract

Recent advances in Large Language Models (LLMs) have underscored the potential of Reinforcement Learning (RL) to facilitate the emergence of reasoning capabilities. Despite the encouraging results, a fundamental dilemma persists as RL improvement relies on learning from high-quality samples, yet the exploration for such samples remains bounded by the inherent limitations of LLMs. This, in effect, creates an undesirable cycle in which what cannot be explored cannot be learned. In this work, we propose Rubric-Scaffolded Reinforcement Learning (RuscaRL), a novel instructional scaffolding framework designed to break the exploration bottleneck for general LLM reasoning. Specifically, RuscaRL introduces checklist-style rubrics as (1) explicit scaffolding for exploration during rollout generation, where different rubrics are provided as external guidance within task instructions to steer diverse high-quality responses. This guidance is gradually decayed over time, encouraging the model to internalize the underlying reasoning patterns; (2) verifiable rewards for exploitation during model training, where we can obtain robust LLM-as-a-Judge scores using rubrics as references, enabling effective RL on general reasoning tasks. Extensive experiments demonstrate the superiority of the proposed RuscaRL across various benchmarks, effectively expanding reasoning boundaries under the best-of-N evaluation. Notably, RuscaRL significantly boosts Qwen-2.5-7B-Instruct from 23.6 to 50.3 on HealthBench-500, surpassing GPT-4.1. Furthermore, our fine-tuned variant on Qwen3-30B-A3B-Instruct achieves 61.1 on HealthBench-500, outperforming leading LLMs including OpenAI-o3.
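The abstract describes two roles for checklist-style rubrics: a decaying scaffold injected into rollout prompts, and a rubric-referenced LLM-as-a-Judge reward used for the RL update. The following is a minimal Python sketch of that loop under stated assumptions; the function names (scaffold_fraction, build_prompt, rubric_reward, generate_rollouts), the linear decay schedule, and the averaged judge score are illustrative placeholders, not the paper's actual implementation.

    # Minimal sketch of rubric-scaffolded rollouts and rubric-referenced rewards.
    # All names and the decay schedule below are illustrative assumptions.
    import random
    from typing import Callable, List, Tuple

    def scaffold_fraction(step: int, total_steps: int) -> float:
        """Linearly decay the share of rollouts that receive rubric guidance."""
        return max(0.0, 1.0 - step / total_steps)

    def build_prompt(task: str, rubric_items: List[str], use_scaffold: bool) -> str:
        """Optionally append checklist-style rubric items as explicit guidance."""
        if not use_scaffold:
            return task
        checklist = "\n".join(f"- {item}" for item in rubric_items)
        return f"{task}\n\nWhen answering, make sure to address:\n{checklist}"

    def rubric_reward(response: str, rubric_items: List[str],
                      judge: Callable[[str, str], float]) -> float:
        """Average LLM-as-a-Judge scores over rubric items (verifiable reward)."""
        return sum(judge(response, item) for item in rubric_items) / len(rubric_items)

    def generate_rollouts(task: str, rubric_items: List[str],
                          policy: Callable[[str], str],
                          judge: Callable[[str, str], float],
                          step: int, total_steps: int,
                          n: int = 8) -> List[Tuple[str, float]]:
        """Sample n responses; a decaying fraction of them see the rubric scaffold."""
        frac = scaffold_fraction(step, total_steps)
        rollouts = []
        for _ in range(n):
            use_scaffold = random.random() < frac
            prompt = build_prompt(task, rubric_items, use_scaffold)
            response = policy(prompt)
            rollouts.append((response, rubric_reward(response, rubric_items, judge)))
        return rollouts  # (response, reward) pairs for the policy-gradient update

In such a setup, the (response, reward) pairs would feed a standard policy-gradient update (e.g., PPO- or GRPO-style); the paper's exact decay schedule and judging protocol may differ from this sketch.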