Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts
May 8, 2026
Authors: Meng Lou, Yunxiang Fu, Yizhou Yu
cs.AI
Abstract
Continual learning, especially class-incremental learning (CIL), built on pre-trained models (PTMs) has garnered substantial research interest in recent years. However, how to learn feature representations that are both discriminative and comprehensive, while maintaining stability and plasticity over very long task sequences, remains an open problem. We propose CaRE, a scalable Continual Learner with an efficient Bi-Level Routing Mixture-of-Experts (BR-MoE). The core idea of BR-MoE is a bi-level routing mechanism: a router selection stage dynamically activates relevant task-specific routers, and an expert routing stage then dynamically activates and aggregates experts, injecting discriminative and comprehensive representations into every intermediate network layer. In addition, we introduce a challenging dataset, OmniBenchmark-1K, for evaluating CIL performance on very long task sequences with hundreds of tasks. Extensive experiments demonstrate that CaRE achieves leading performance across a variety of datasets and task settings, including commonly used CIL datasets under classical CIL settings (e.g., 5-20 tasks). To the best of our knowledge, CaRE is the first continual learner that scales to very long task sequences (ranging from 100 to over 300 non-overlapping tasks), outperforming all baselines by a large margin on such sequences. We hope this work will inspire further research into continual learning over extremely long task sequences. Code and dataset are publicly released at https://github.com/LMMMEng/CaRE.
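To make the bi-level routing idea concrete, below is a minimal sketch of such a layer in PyTorch. It is not the released CaRE implementation: the class name `BiLevelRoutingMoE`, the top-k values, the MLP experts, and all shapes are illustrative assumptions; it only shows the two-stage flow of router selection followed by expert routing and aggregation.

```python
# Minimal sketch of a bi-level routing MoE layer (assumed PyTorch-style design,
# not the official CaRE code): stage 1 selects task-specific routers, stage 2
# lets the selected routers pick and weight experts, whose outputs are aggregated.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLevelRoutingMoE(nn.Module):
    def __init__(self, dim, num_tasks=4, num_experts=8,
                 router_top_k=1, expert_top_k=2):
        super().__init__()
        self.router_top_k = router_top_k
        self.expert_top_k = expert_top_k
        # Stage 1: scores each task-specific router from the input features.
        self.router_selector = nn.Linear(dim, num_tasks)
        # One lightweight router per task; each maps features to expert logits.
        self.task_routers = nn.ModuleList(
            [nn.Linear(dim, num_experts) for _ in range(num_tasks)])
        # Shared pool of experts (simple MLPs for illustration).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
             for _ in range(num_experts)])

    def forward(self, x):  # x: (batch, dim)
        # Stage 1: router selection -- keep the top-k task-specific routers.
        router_scores = self.router_selector(x)                         # (B, T)
        top_r_scores, top_r_idx = router_scores.topk(self.router_top_k, dim=-1)
        router_weights = F.softmax(top_r_scores, dim=-1)                # (B, Kr)

        # Stage 2: expert routing -- each activated router scores the experts;
        # its logits are blended according to the router weights.
        expert_logits = x.new_zeros(x.size(0), len(self.experts))       # (B, E)
        for r, router in enumerate(self.task_routers):
            in_top = (top_r_idx == r)                                   # (B, Kr)
            if in_top.any():
                w_r = (router_weights * in_top).sum(dim=-1, keepdim=True)
                expert_logits = expert_logits + w_r * router(x)
        top_e_scores, top_e_idx = expert_logits.topk(self.expert_top_k, dim=-1)
        expert_weights = F.softmax(top_e_scores, dim=-1)                # (B, Ke)

        # Aggregate the selected experts (computed densely for clarity; a real
        # layer would dispatch inputs to experts sparsely).
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            w_e = (expert_weights * (top_e_idx == e)).sum(dim=-1, keepdim=True)
            if w_e.sum() > 0:
                out = out + w_e * expert(x)
        return x + out  # inject routed-expert features back into the layer


if __name__ == "__main__":
    layer = BiLevelRoutingMoE(dim=64)
    y = layer(torch.randn(2, 64))
    print(y.shape)  # torch.Size([2, 64])
```

In this sketch the stage-1 selection is what keeps the layer scalable as tasks accumulate: only a few task-specific routers are activated per input, so the cost of expert routing does not grow with the total number of tasks; how CaRE grows, shares, or regularizes routers across hundreds of tasks is described in the paper and repository.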