
RL-PLUS: Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization

July 31, 2025
作者: Yihong Dong, Xue Jiang, Yongding Tao, Huanyu Liu, Kechi Zhang, Lili Mou, Rongyu Cao, Yingwei Ma, Jue Chen, Binhua Li, Zhi Jin, Fei Huang, Yongbin Li, Ge Li
cs.AI

Abstract

Reinforcement Learning with Verifiable Reward (RLVR) has significantly advanced the complex reasoning abilities of Large Language Models (LLMs). However, it struggles to break through the inherent capability boundaries of the base LLM due to its essentially on-policy nature, coupled with the LLM's immense action space and sparse rewards. Critically, RLVR can lead to capability boundary collapse, narrowing the LLM's problem-solving scope. To address this problem, we propose RL-PLUS, a novel hybrid-policy optimization approach for LLMs that synergizes internal exploitation with external data to achieve stronger reasoning capabilities and surpass the boundaries of base models. RL-PLUS integrates two core components: Multiple Importance Sampling, to address the distributional mismatch introduced by external data, and an Exploration-Based Advantage Function, to guide the model toward high-value, unexplored reasoning paths. We provide both theoretical analysis and extensive experiments to demonstrate the superiority and generalizability of our approach. Compared with existing RLVR methods, RL-PLUS achieves 1) state-of-the-art performance on six math reasoning benchmarks; 2) superior performance on six out-of-distribution reasoning tasks; and 3) consistent and significant gains across diverse model families, with average relative improvements of up to 69.2%. Moreover, analysis of Pass@k curves indicates that RL-PLUS effectively resolves the capability boundary collapse problem.
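The abstract names the two components only at a high level. Below is a minimal, self-contained sketch of how balance-heuristic multiple importance sampling and a simple exploration bonus *could* be wired together; it is not the paper's implementation, and the mixing coefficient `alpha`, the bonus weight `beta`, the function names, and the per-sample log-probability inputs are all assumptions introduced for illustration.

```python
# Minimal sketch: multiple importance sampling (balance heuristic) for
# mixing on-policy rollouts with external (off-policy) data, plus a
# hypothetical exploration-shaped advantage. Illustrative only; all
# hyperparameters and distributions here are assumptions, not the paper's.

import numpy as np

def mis_weights(logp_target, logp_onpolicy, logp_external, alpha=0.5):
    """Balance-heuristic MIS ratio for samples that may have been drawn
    from either the on-policy distribution or the external-data one:

        w(x) = p_target(x) / (alpha * p_onpolicy(x) + (1-alpha) * p_external(x))
    """
    # Compute the mixture denominator in log space for numerical stability.
    log_mix = np.logaddexp(np.log(alpha) + logp_onpolicy,
                           np.log(1.0 - alpha) + logp_external)
    return np.exp(logp_target - log_mix)

def exploration_advantage(reward, logp_target, beta=0.1):
    """Hypothetical exploration-shaped advantage: add a bonus that grows
    as the sampled path becomes less likely under the current policy,
    nudging updates toward high-value but rarely explored trajectories."""
    return reward + beta * (-logp_target)

# Toy usage with per-sample log-probs for a batch of 4 sampled trajectories.
rng = np.random.default_rng(0)
logp_target = rng.normal(-1.0, 0.3, size=4)    # current policy pi_theta
logp_onpolicy = rng.normal(-1.0, 0.3, size=4)  # rollout (old) policy
logp_external = rng.normal(-2.0, 0.5, size=4)  # external data distribution
reward = rng.uniform(0.0, 1.0, size=4)         # placeholder verifiable reward

w = mis_weights(logp_target, logp_onpolicy, logp_external)
adv = exploration_advantage(reward, logp_target)
loss = -(w * adv).mean()                       # REINFORCE-style surrogate
print(w, adv, loss)
```

The design point worth noting is the mixture denominator: weighting each sample against a mixture of behavior distributions keeps the importance ratio bounded whenever the sample is likely under either source, which is what makes combining on-policy rollouts with external data numerically stable.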