FastCuRL: Curriculum Reinforcement Learning with Progressive Context Extension for Efficient Training R1-like Reasoning Models
March 21, 2025
Authors: Mingyang Song, Mao Zheng, Zheng Li, Wenjie Yang, Xuan Luo, Yue Pan, Feng Zhang
cs.AI
Abstract
In this paper, we propose \textsc{FastCuRL}, a simple yet efficient curriculum reinforcement learning approach with a context window extension strategy that accelerates reinforcement learning training for R1-like reasoning models while improving their performance on complex reasoning tasks with long chain-of-thought rationales, particularly with a 1.5B-parameter language model. \textsc{FastCuRL} consists of two main procedures: length-aware training data segmentation and context window extension training. Specifically, the former splits the original training data into three levels by input prompt length, and the latter then trains the reasoning model on the segmented datasets with a progressively increasing context window length. Experimental results demonstrate that \textsc{FastCuRL}-1.5B-Preview surpasses DeepScaleR-1.5B-Preview across all five benchmark datasets (MATH 500, AIME 2024, AMC 2023, Minerva Math, and OlympiadBench) while using only 50\% of the training steps. Furthermore, all training stages of \textsc{FastCuRL}-1.5B-Preview are completed on a single node with 8 GPUs.
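
To make the two procedures concrete, here is a minimal Python sketch of the pipeline as described in the abstract: split the data into three levels by prompt length, then run staged RL training with a progressively larger context window. The token-length thresholds, the context window schedule, and the `train_stage` hook are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of FastCuRL's two procedures: length-aware data
# segmentation followed by staged training with a progressively extended
# context window. All thresholds and window sizes are assumptions.

def segment_by_prompt_length(dataset, tokenizer, short_max=256, medium_max=512):
    """Split training examples into three levels by input prompt length."""
    levels = {"short": [], "medium": [], "long": []}
    for example in dataset:
        n_tokens = len(tokenizer.encode(example["prompt"]))
        if n_tokens <= short_max:
            levels["short"].append(example)
        elif n_tokens <= medium_max:
            levels["medium"].append(example)
        else:
            levels["long"].append(example)
    return levels

def curriculum_rl_training(model, levels, train_stage):
    """Train in stages, extending the context window at each stage.

    `train_stage(model, data, max_context)` is an assumed hook that runs
    one RL training phase with generation capped at `max_context` tokens.
    """
    schedule = [
        ("short", 8_192),    # stage 1: short prompts, small window
        ("medium", 16_384),  # stage 2: medium prompts, larger window
        ("long", 24_576),    # stage 3: long prompts, full window
    ]
    for level, max_context in schedule:
        model = train_stage(model, levels[level], max_context)
    return model
```

Under this reading, the curriculum couples data difficulty (proxied by prompt length) with generation budget, so early stages avoid spending compute on long rollouts that the model cannot yet use well.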