Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert Models
October 16, 2025
Authors: Guinan Su, Yanwu Yang, Li Shen, Lu Yin, Shiwei Liu, Jonas Geiping
cs.AI
Abstract
Mixture-of-Experts (MoE) models achieve efficient scaling through sparse
expert activation, but often suffer from suboptimal routing decisions due to
distribution shifts in deployment. While existing test-time adaptation methods
could potentially address these issues, they primarily focus on dense models
and require access to external data, limiting their practical applicability to
MoE architectures. However, we find that, instead of relying on reference data,
we can optimize MoE expert selection on-the-fly based only on input context. As
such, we propose a data-free, online test-time framework that
continuously adapts MoE routing decisions during text generation without
external supervision or data. Our method cycles between two phases: during
the prefill stage, and later at regular intervals, we optimize the model's
routing decisions using self-supervision on the already generated sequence.
Then, we generate text as normal, maintaining the modified router until the
next adaptation. We implement this through lightweight additive vectors
that only update router logits in selected layers, maintaining computational
efficiency while preventing over-adaptation. The experimental results show
consistent performance gains on challenging reasoning tasks while maintaining
robustness to context shifts. For example, our method achieves a 5.5%
improvement on HumanEval with OLMoE. Furthermore, owing to its plug-and-play
property, our method naturally complements existing test-time scaling
techniques, e.g., achieving 6% average gains when incorporated with
self-consistency on DeepSeek-V2-Lite.
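The core mechanism described above (a lightweight additive vector that shifts the router logits of selected layers, while the router weights themselves stay frozen) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the weights, the `route` helper, and the hand-set offset are all made up for demonstration, and the self-supervised optimization of the offset is omitted.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(hidden, W, delta, top_k=2):
    """Top-k MoE routing for one token, with an additive logit offset.

    hidden: token hidden state (list of floats)
    W:      frozen router weights, one row per expert
    delta:  per-expert additive offset -- the only quantity that would
            be adapted at test time in the method sketched here
    Returns (chosen expert indices, normalized gate weights).
    """
    logits = [sum(w * h for w, h in zip(row, hidden)) + d
              for row, d in zip(W, delta)]
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    chosen = ranked[:top_k]
    gates = softmax([logits[i] for i in chosen])
    return chosen, gates

# Toy setup: 4 experts, 3-dim hidden state (values are arbitrary).
W = [[0.2, -0.1, 0.4], [0.0, 0.3, -0.2], [0.5, 0.1, 0.0], [-0.3, 0.2, 0.1]]
h = [1.0, 0.5, -0.5]

no_shift = route(h, W, delta=[0.0] * 4)          # frozen router as-is
shifted = route(h, W, delta=[0.0, 1.0, 0.0, 0.0])  # offset favors expert 1
```

Because only `delta` changes, the adaptation is cheap to store and to undo, which matches the abstract's point about maintaining computational efficiency and avoiding over-adaptation.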