
Balanced Aggregation: Understanding and Fixing Aggregation Bias in GRPO

April 14, 2026
Authors: Zhiyuan Zeng, Jiameng Huang, Zhangyue Yin, Jiashuo Liu, Ziniu Li, Bingrui Li, Yuhao Wu, Yining Zheng, Ge Zhang, Wenhao Huang, Xipeng Qiu
cs.AI

Abstract

Reinforcement learning with verifiable rewards (RLVR) has become a central paradigm for improving reasoning and code generation in large language models, and GRPO-style training is widely adopted for its simplicity and effectiveness. However, an important design choice remains underexplored: how token-level policy gradient terms are aggregated within each sampled group. Standard GRPO uses sequence aggregation, while recent work has advocated token aggregation as a better alternative. We show that these two rules induce different optimization biases: token aggregation introduces sign-length coupling, while sequence aggregation implicitly downweights longer responses through sequence-level equal weighting. To address this tension, we propose Balanced Aggregation (BA), a simple drop-in replacement that computes token-level means separately within the positive and negative subsets and then combines them with sequence-count-based weights. Experiments with Qwen2.5-Math-7B and Qwen3-1.7B on DAPO-17k and Polaris, evaluated on six reasoning and coding benchmarks, show that BA consistently improves training stability and final performance over standard token and sequence aggregation. Our analysis further shows that the relative effectiveness of token and sequence aggregation is largely governed by response-length variation and the positive-negative length gap, highlighting aggregation as a critical design dimension in GRPO-style RLVR.
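The abstract describes Balanced Aggregation as computing token-level means separately within the positive and negative subsets and combining them with sequence-count-based weights. A minimal sketch of that rule, under the assumption that each sampled response carries one scalar advantage whose sign assigns it to a subset and that the combination weights are proportional to the number of sequences in each subset (the paper's exact weighting may differ):

```python
def balanced_aggregation(token_losses, advantages):
    """Hypothetical sketch of Balanced Aggregation (BA).

    token_losses: list of lists, one inner list of per-token loss terms
                  per sampled response in the group.
    advantages:   one scalar advantage per response; its sign decides
                  membership in the positive or negative subset.
    """
    pos_tokens, neg_tokens = [], []
    n_pos = n_neg = 0
    for losses, adv in zip(token_losses, advantages):
        if adv > 0:
            pos_tokens.extend(losses)
            n_pos += 1
        elif adv < 0:
            neg_tokens.extend(losses)
            n_neg += 1
    n = n_pos + n_neg
    if n == 0:  # no nonzero-advantage responses in this group
        return 0.0
    # Token-level mean within each subset: every token in a subset counts
    # equally, so long responses are not downweighted inside the subset.
    mean_pos = sum(pos_tokens) / len(pos_tokens) if pos_tokens else 0.0
    mean_neg = sum(neg_tokens) / len(neg_tokens) if neg_tokens else 0.0
    # Combine the subset means with weights based on sequence counts,
    # decoupling the positive/negative balance from response lengths.
    return (n_pos * mean_pos + n_neg * mean_neg) / n
```

Compared with plain token aggregation (a single mean over all tokens in the group), this keeps a long negative response from dominating the update simply because it contributes more tokens, which matches the sign-length coupling the abstract attributes to token aggregation.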