

GRPO-MA: Multi-Answer Generation in GRPO for Stable and Efficient Chain-of-Thought Training

September 29, 2025
Authors: Hongcheng Wang, Yinuo Huang, Sukai Wang, Guanghui Ren, Hao Dong
cs.AI

Abstract

Recent progress, such as DeepSeek-R1, has shown that the GRPO algorithm, a Reinforcement Learning (RL) approach, can effectively train Chain-of-Thought (CoT) reasoning in Large Language Models (LLMs) and Vision-Language Models (VLMs). In this paper, we analyze three challenges of GRPO: gradient coupling between thoughts and answers, sparse reward signals caused by limited parallel sampling, and unstable advantage estimation. To mitigate these challenges, we propose GRPO-MA, a simple yet theoretically grounded method that leverages multi-answer generation from each thought process, enabling more robust and efficient optimization. Theoretically, we show that the variance of thought advantage decreases as the number of answers per thought increases. Empirically, our gradient analysis confirms this effect, showing that GRPO-MA reduces gradient spikes compared to GRPO. Experiments on math, code, and diverse multimodal tasks demonstrate that GRPO-MA substantially improves performance and training efficiency. Our ablation studies further reveal that increasing the number of answers per thought consistently enhances model performance.
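The abstract's core idea, sampling multiple answers from each thought and averaging their rewards to get a lower-variance thought advantage, can be illustrated with a minimal sketch. This is not the authors' code; the group size G, answer count M, reward shapes, and normalization details below are assumptions chosen purely for illustration of the general GRPO-style group-normalized advantage.

```python
# Minimal sketch (assumed, not the paper's implementation) of multi-answer
# thought-advantage estimation in the spirit of GRPO-MA.
import numpy as np

def grpo_ma_thought_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """rewards: shape (G, M) — G sampled thoughts, M answers sampled per thought.
    Each thought's reward estimate is the mean over its M answers; the estimates
    are then group-normalized across the G thoughts, GRPO-style."""
    thought_rewards = rewards.mean(axis=1)          # averaging over M answers shrinks estimator variance
    mean, std = thought_rewards.mean(), thought_rewards.std()
    return (thought_rewards - mean) / (std + eps)   # normalized thought advantages, shape (G,)

# Example: G=4 thoughts, M=8 answers each, binary correctness rewards.
rng = np.random.default_rng(0)
rewards = rng.integers(0, 2, size=(4, 8)).astype(float)
print(grpo_ma_thought_advantages(rewards))
# With M=1 the same computation reduces to the single-answer GRPO estimate.
```

With M = 1 each thought's advantage rests on a single sampled answer, so the estimate inherits the full answer-level noise; averaging over M answers reduces that conditional-variance contribution roughly in proportion to 1/M, which is consistent with the abstract's claim that the thought-advantage variance decreases as M grows.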