Angles Don't Lie: Unlocking Training-Efficient RL Through the Model's Own Signals
June 2, 2025
Authors: Qinsi Wang, Jinghan Ke, Hancheng Ye, Yueqian Lin, Yuzhe Fu, Jianyi Zhang, Kurt Keutzer, Chenfeng Xu, Yiran Chen
cs.AI
Abstract
Current Reinforcement Fine-tuning (RFT) paradigms for Large Language Models
(LLMs) suffer from sample inefficiency due to the redundant exposure of
identical queries under uniform data sampling. While previous work has explored
curriculum learning via heuristic difficulty metrics, these strategies exhibit
limitations by neglecting the intrinsic learning signals generated by the model
itself, thus leading to suboptimal training regimes. In this paper, we identify
a model-inherent signal termed angle concentration that effectively reflects an
LLM's capacity to learn from specific data. We theoretically and empirically
demonstrate a correlation between the angular distribution of token hidden
state vectors and the resulting gradient, revealing a learning preference for
data exhibiting higher angle concentration. Inspired by this finding, we
propose GAIN-RL, a Gradient-driven Angle-Informed Navigated RL framework. By
leveraging the model's intrinsic angle concentration signal, GAIN-RL
dynamically selects training data in each epoch, ensuring consistently
impactful gradient updates and thus significantly enhancing overall training
efficiency. Empirical evaluations show that GAIN-RL (GRPO) achieves over a 2.5x
acceleration in training efficiency across diverse mathematical and coding
tasks and varying model scales. Furthermore, GAIN-RL (GRPO)'s efficient
sampling yields data-efficient training, achieving better performance with half
the original data compared to vanilla GRPO with full training data. Code is
released at https://github.com/wangqinsi1/GAINRL/tree/main.
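The abstract does not spell out how angle concentration is computed. As a rough illustration only, the sketch below (not the authors' released GAIN-RL code) scores each prompt by the mean pairwise cosine similarity of its last-layer token hidden states and then orders an epoch's data from most to least concentrated. The function names and the use of mean pairwise cosine as the concentration measure are assumptions for illustration; see the repository linked above for the actual implementation.

```python
# Hypothetical sketch, not the released GAIN-RL code: approximate a prompt's
# "angle concentration" as the mean pairwise cosine similarity of its
# last-layer token hidden states, then sort an epoch's data by that score.
import torch


def angle_concentration(hidden_states: torch.Tensor) -> float:
    """hidden_states: (num_tokens, hidden_dim) for a single prompt.

    Returns a scalar in [-1, 1]; higher values mean the token hidden-state
    vectors point in more similar directions (smaller pairwise angles).
    """
    h = torch.nn.functional.normalize(hidden_states, dim=-1)  # unit vectors
    cos = h @ h.T                                             # pairwise cosines
    n = cos.size(0)
    off_diag = cos.sum() - torch.diagonal(cos).sum()          # drop self-similarity
    return (off_diag / (n * (n - 1))).item()


def rank_by_angle_concentration(model, tokenizer, prompts, device="cuda"):
    """Order prompts so that higher-concentration data comes first.

    Assumes a Hugging Face-style causal LM that returns hidden states when
    called with output_hidden_states=True.
    """
    scores = []
    model.eval()
    with torch.no_grad():
        for p in prompts:
            inputs = tokenizer(p, return_tensors="pt").to(device)
            out = model(**inputs, output_hidden_states=True)
            h = out.hidden_states[-1][0]          # (num_tokens, hidden_dim)
            scores.append(angle_concentration(h))
    order = sorted(range(len(prompts)), key=lambda i: scores[i], reverse=True)
    return [prompts[i] for i in order], [scores[i] for i in order]
```

In a curriculum built this way, the scores would be refreshed as the model updates, since the abstract describes selecting data dynamically in each epoch from the model's own signal rather than from a fixed difficulty ordering.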