
Stabilizing MoE Reinforcement Learning by Aligning Training and Inference Routers

October 13, 2025
Authors: Wenhan Ma, Hailin Zhang, Liang Zhao, Yifan Song, Yudong Wang, Zhifang Sui, Fuli Luo
cs.AI

Abstract

Reinforcement learning (RL) has emerged as a crucial approach for enhancing the capabilities of large language models. However, in Mixture-of-Experts (MoE) models, the routing mechanism often introduces instability, even leading to catastrophic RL training collapse. We analyze the training-inference consistency of MoE models and identify a notable discrepancy in routing behaviors between the two phases. Moreover, even under identical conditions, the routing framework can yield divergent expert selections across repeated forward passes. To address this foundational inconsistency, we propose Rollout Routing Replay (R3), a method that records routing distributions from the inference engine and replays them during training. R3 significantly reduces training-inference policy KL divergence and mitigates extreme discrepancies without compromising training speed. Extensive experiments on various settings confirm that R3 succeeds in stabilizing RL training, preventing collapse and outperforming methods such as GSPO and TIS. We believe this work can offer a new solution for stabilizing RL in MoE models.
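
The abstract describes R3 as recording routing distributions from the inference engine and replaying them during training. The sketch below illustrates one plausible reading of that mechanism for a top-k MoE gate in PyTorch: the expert selections made at rollout time override the training-time top-k choice, so both phases route each token to the same experts. The class and parameter names (`R3Router`, `replay_indices`) and the exact replay granularity are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of routing replay in the spirit of R3, assuming a standard
# top-k MoE gate. Names and shapes are illustrative assumptions.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class R3Router(nn.Module):
    """Top-k MoE gate that can replay expert choices recorded at rollout time."""

    def __init__(self, hidden_size: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k

    def forward(
        self,
        x: torch.Tensor,                                # [num_tokens, hidden_size]
        replay_indices: Optional[torch.Tensor] = None,  # [num_tokens, top_k]
    ):
        logits = self.gate(x)                           # [num_tokens, num_experts]
        if replay_indices is None:
            # Ordinary path: choose experts from this pass's own logits.
            topk_logits, topk_idx = logits.topk(self.top_k, dim=-1)
        else:
            # Replay path: reuse the expert ids recorded by the inference engine
            # during rollout, but gather the training-time logits at those
            # positions so the gate weights stay differentiable.
            topk_idx = replay_indices
            topk_logits = logits.gather(-1, topk_idx)
        gate_weights = F.softmax(topk_logits, dim=-1)
        return topk_idx, gate_weights


if __name__ == "__main__":
    router = R3Router(hidden_size=16, num_experts=8, top_k=2)
    tokens = torch.randn(4, 16)

    # Rollout (inference engine): record which experts each token was sent to.
    with torch.no_grad():
        rollout_idx, _ = router(tokens)

    # Training: replay the recorded routing instead of re-deriving it, so the
    # policy used for gradient updates routes exactly like the rollout policy.
    train_idx, train_weights = router(tokens, replay_indices=rollout_idx)
    assert torch.equal(train_idx, rollout_idx)
```

In this reading, the design choice is that only the discrete expert selection is frozen to the rollout's decision, while the gate logits (and hence the combine weights) remain functions of the training parameters, which is what keeps the gate trainable while removing the training-inference routing mismatch the paper attributes to instability.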