

GRPO-MA: Multi-Answer Generation in GRPO for Stable and Efficient Chain-of-Thought Training

September 29, 2025
Authors: Hongcheng Wang, Yinuo Huang, Sukai Wang, Guanghui Ren, Hao Dong
cs.AI

Abstract

Recent progress, such as DeepSeek-R1, has shown that the GRPO algorithm, a Reinforcement Learning (RL) approach, can effectively train Chain-of-Thought (CoT) reasoning in Large Language Models (LLMs) and Vision-Language Models (VLMs). In this paper, we analyze three challenges of GRPO: gradient coupling between thoughts and answers, sparse reward signals caused by limited parallel sampling, and unstable advantage estimation. To mitigate these challenges, we propose GRPO-MA, a simple yet theoretically grounded method that leverages multi-answer generation from each thought process, enabling more robust and efficient optimization. Theoretically, we show that the variance of thought advantage decreases as the number of answers per thought increases. Empirically, our gradient analysis confirms this effect, showing that GRPO-MA reduces gradient spikes compared to GRPO. Experiments on math, code, and diverse multimodal tasks demonstrate that GRPO-MA substantially improves performance and training efficiency. Our ablation studies further reveal that increasing the number of answers per thought consistently enhances model performance.
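
To make the multi-answer idea concrete, below is a minimal NumPy sketch of one plausible reading of the method described in the abstract: each sampled thought receives the mean reward of its K sampled answers, and thought-level advantages are then normalized across the group in the usual GRPO style. The shapes, the simple averaging, and the normalization constant are illustrative assumptions, not the paper's exact formulation. Under an independence assumption, averaging K answer rewards shrinks the variance of each thought's reward estimate by roughly a factor of K, which is consistent with the abstract's claim of lower thought-advantage variance and fewer gradient spikes.

```python
# Minimal sketch (not the authors' code) of multi-answer advantage estimation.
# Assumption: each of G thoughts gets K sampled answers with scalar rewards,
# the thought reward is their mean, and advantages are normalized over the group.
import numpy as np

def thought_advantages(answer_rewards: np.ndarray) -> np.ndarray:
    """answer_rewards: shape (G, K), K answer rewards for each of G thoughts.

    Returns one advantage per thought, normalized by the group mean and std.
    """
    thought_rewards = answer_rewards.mean(axis=1)            # average over K answers
    mu = thought_rewards.mean()
    sigma = thought_rewards.std() + 1e-8                      # avoid division by zero
    return (thought_rewards - mu) / sigma

# Example: G = 4 thoughts, K = 3 binary answer rewards each.
rng = np.random.default_rng(0)
rewards = rng.integers(0, 2, size=(4, 3)).astype(float)
print(thought_advantages(rewards))
```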