Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models
February 12, 2026
Authors: Xin Xu, Clive Bai, Kai Yang, Tianhao Chen, Yangkun Chen, Weijie Liu, Hao Chen, Yang Wang, Saiyong Yang, Can Yang
cs.AI
Abstract
Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, thereby reducing the effective data size. To mitigate this, we propose Composition-RL, a simple yet effective approach that targets pass-rate-1 prompts to make better use of limited verifiable prompts. More specifically, Composition-RL automatically composes multiple problems into a new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be further boosted with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Code, datasets, and models are available at https://github.com/XinXU-USTC/Composition-RL.
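
The abstract describes the composition mechanism only at a high level. The sketch below is a minimal illustration of the idea, not the paper's released implementation: it assumes hypothetical `policy` and `verifier` callables and a simple concatenation template, estimates each prompt's rollout pass rate, and replaces pass-rate-1 (too easy) prompts with `depth`-way compositions, where `depth` stands in for the compositional depth that the curriculum variant increases over training.

```python
import random
from dataclasses import dataclass


@dataclass
class Prompt:
    question: str
    answer: object  # ground-truth answer(s) consumed by the verifier


def estimate_pass_rate(prompt, policy, verifier, n_rollouts=8):
    """Fraction of sampled rollouts the verifier accepts for this prompt."""
    hits = sum(
        verifier(policy(prompt.question), prompt.answer)
        for _ in range(n_rollouts)
    )
    return hits / n_rollouts


def compose(prompts):
    """Merge several verifiable problems into one compositional prompt.

    Illustrative template only: concatenate the sub-questions and keep all
    sub-answers, so the verifier for a composed prompt must check every part.
    The paper's actual composition template may differ.
    """
    question = "\n".join(
        f"Problem {i + 1}: {p.question}" for i, p in enumerate(prompts)
    )
    return Prompt(question=question, answer=[p.answer for p in prompts])


def build_training_set(prompts, policy, verifier, depth=2, seed=0):
    """Replace pass-rate-1 prompts with depth-way compositions.

    `depth` plays the role of the compositional depth that the curriculum
    variant of Composition-RL gradually increases over training.
    """
    rng = random.Random(seed)
    easy, rest = [], []
    for p in prompts:
        bucket = easy if estimate_pass_rate(p, policy, verifier) == 1.0 else rest
        bucket.append(p)
    rng.shuffle(easy)
    composed = [
        compose(easy[i:i + depth])
        for i in range(0, len(easy) - depth + 1, depth)
    ]
    return rest + composed
```

A toy usage, with a stub policy and an exact-match verifier standing in for a real rollout model and answer checker:

```python
data = [
    Prompt("What is 2 + 2?", "4"),
    Prompt("What is 1 + 3?", "4"),
    Prompt("What is 6 * 7?", "42"),
]
policy = lambda question: "4"            # stub model: always answers "4"
verifier = lambda response, answer: response == answer
train_set = build_training_set(data, policy, verifier, depth=2)
```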