
EDGE-GRPO: Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity

July 29, 2025
Authors: Xingjian Zhang, Siwei Wen, Wenjun Wu, Lei Huang
cs.AI

Abstract

Large Language Models (LLMs) have made remarkable progress in enhancing step-by-step reasoning through reinforcement learning. However, the Group Relative Policy Optimization (GRPO) algorithm, which relies on sparse reward rules, often encounters the issue of identical rewards within groups, leading to the advantage collapse problem. Existing works typically address this challenge from two perspectives: enforcing model reflection to enhance response diversity, and introducing internal feedback to augment the training signal (advantage). In this work, we begin by analyzing the limitations of model reflection and investigating the policy entropy of responses at the fine-grained sample level. Based on our experimental findings, we propose the EDGE-GRPO algorithm, which adopts Entropy-Driven Advantage and Guided Error Correction to effectively mitigate the problem of advantage collapse. Extensive experiments on several main reasoning benchmarks demonstrate the effectiveness and superiority of our approach. The code is available at https://github.com/ZhangXJ199/EDGE-GRPO.
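The advantage collapse described above is easy to see numerically: GRPO normalizes each response's reward against its group, so when every response in a group receives the same sparse reward, all advantages become zero and the group contributes no learning signal. The sketch below illustrates this, plus a hypothetical entropy-driven re-weighting in the spirit of the paper's Entropy-Driven Advantage; the function `entropy_weighted_advantages` and the `alpha` parameter are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the official EDGE-GRPO code): group-relative advantages
# as used by GRPO, showing how identical rewards collapse the advantage to zero,
# and a hypothetical entropy-driven modulation that restores within-group diversity.
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: z-score of each response's reward within its group."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def entropy_weighted_advantages(rewards, entropies, alpha=0.5, eps=1e-8):
    """Hypothetical entropy-driven variant: shift each advantage by the response's
    per-sample policy-entropy deviation so a group with identical rewards still
    yields a non-uniform signal. `alpha` (an assumed hyperparameter) scales the effect."""
    adv = grpo_advantages(rewards, eps)
    entropies = np.asarray(entropies, dtype=float)
    ent_dev = entropies - entropies.mean()   # per-response entropy deviation from the group mean
    return adv + alpha * ent_dev             # non-zero even when all rewards are equal

# All responses receive the same sparse reward -> advantages collapse to zero.
rewards = [1.0, 1.0, 1.0, 1.0]
print(grpo_advantages(rewards))              # [0. 0. 0. 0.]

# Hypothetical per-response policy entropies break the tie and keep a training signal.
entropies = [0.8, 1.2, 0.5, 1.5]
print(entropy_weighted_advantages(rewards, entropies))
```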