Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models

February 12, 2026
Authors: Xin Xu, Clive Bai, Kai Yang, Tianhao Chen, Yangkun Chen, Weijie Liu, Hao Chen, Yang Wang, Saiyong Yang, Can Yang
cs.AI

Abstract

Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, shrinking the effective data size. To mitigate this, we propose Composition-RL, a simple yet effective approach that targets pass-rate-1 prompts to make better use of limited verifiable prompts. Specifically, Composition-RL automatically composes multiple problems into a new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be boosted further with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Code, datasets, and models are available at https://github.com/XinXU-USTC/Composition-RL.
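The abstract describes the mechanism only at a high level. Below is a minimal sketch, in Python, of what composing pass-rate-1 problems into a single verifiable question could look like; the helper names (`compose_prompts`, `verify_all`, `depth_schedule`), the prompt template, and the all-or-nothing reward are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import random

# Illustrative sketch only: the composition template and reward below are
# assumptions, not the paper's implementation.

def compose_prompts(pool, depth):
    """Sample `depth` pass-rate-1 problems from `pool` (a list of
    {"question", "answer"} dicts) and merge them into one compositional
    prompt that is only solved when every sub-problem is solved."""
    parts = random.sample(pool, depth)
    body = "\n\n".join(
        f"Part {i + 1}: {p['question']}" for i, p in enumerate(parts)
    )
    instruction = (
        f"\n\nSolve every part and report the answers in order as "
        f"\\boxed{{answer_1}}, ..., \\boxed{{answer_{depth}}}."
    )
    return {"question": body + instruction,
            "answers": [p["answer"] for p in parts]}

def verify_all(predicted, expected):
    """All-or-nothing verifiable reward: 1.0 only if every sub-answer
    matches, which keeps the composite prompt checkable by the same
    binary verifier used for single problems."""
    return float(len(predicted) == len(expected)
                 and all(p == e for p, e in zip(predicted, expected)))

def depth_schedule(step, total_steps, max_depth=3):
    """Curriculum variant: grow compositional depth from 1 to `max_depth`
    roughly linearly over training."""
    return min(max_depth, 1 + (step * max_depth) // total_steps)
```

Under these assumptions, a training loop would call `depth_schedule` each step, build a batch of composed prompts with `compose_prompts`, and score rollouts with `verify_all`; extracting the `\boxed{...}` answers from model output is omitted here.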