
Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts

May 8, 2026
Authors: Meng Lou, Yunxiang Fu, Yizhou Yu
cs.AI

Abstract

Continual learning, especially class-incremental learning (CIL) on the basis of a pre-trained model (PTM), has garnered substantial research interest in recent years. However, how to effectively learn both discriminative and comprehensive feature representations while maintaining stability and plasticity over very long task sequences remains an open problem. We propose CaRE, a scalable Continual Learner with an efficient Bi-Level Routing Mixture-of-Experts (BR-MoE). The core idea of BR-MoE is a bi-level routing mechanism: a router selection stage dynamically activates relevant task-specific routers, and a subsequent expert routing stage dynamically activates and aggregates experts, injecting discriminative and comprehensive representations into every intermediate network layer. In addition, we introduce a challenging dataset, OmniBenchmark-1K, for evaluating CIL performance on very long task sequences with hundreds of tasks. Extensive experiments show that CaRE delivers leading performance across a variety of datasets and task settings, including commonly used CIL datasets under classical CIL settings (e.g., 5-20 tasks). To the best of our knowledge, CaRE is the first continual learner that scales to very long task sequences (ranging from 100 to over 300 non-overlapping tasks), outperforming all baselines by a large margin on such sequences. We hope this work will inspire further research into continual learning over extremely long task sequences. Code and dataset are publicly released at https://github.com/LMMMEng/CaRE.
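
To make the two-stage mechanism concrete, below is a minimal PyTorch sketch of a bi-level routing layer. It is an illustration under stated assumptions, not the released implementation: the class and parameter names (BiLevelRoutingMoE, experts_per_task, top_k_routers, top_k_experts) and the per-task router layout are hypothetical; consult the repository linked above for the authors' actual method.

```python
# Minimal sketch of a bi-level routing MoE layer in PyTorch.
# All names and the expert-pool layout are illustrative assumptions,
# not the authors' implementation; see the official repo for details.
import torch
import torch.nn as nn

class BiLevelRoutingMoE(nn.Module):
    def __init__(self, dim, num_tasks, experts_per_task=4,
                 top_k_routers=2, top_k_experts=2):
        super().__init__()
        self.top_k_routers = top_k_routers
        self.top_k_experts = top_k_experts
        num_experts = num_tasks * experts_per_task
        # Level 1: a gate that scores the task-specific routers.
        self.router_gate = nn.Linear(dim, num_tasks)
        # One lightweight router per task; each scores a shared expert pool.
        self.routers = nn.ModuleList(
            nn.Linear(dim, num_experts) for _ in range(num_tasks))
        # Shared pool of simple MLP experts.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts))

    def forward(self, x):  # x: (batch, dim) intermediate-layer features
        # Stage 1: router selection -- activate the top-k task-specific routers.
        router_scores = self.router_gate(x).softmax(dim=-1)            # (B, T)
        r_weights, r_idx = router_scores.topk(self.top_k_routers, -1)  # (B, k_r)

        out = torch.zeros_like(x)
        for b in range(x.size(0)):  # per-sample loop, kept simple for clarity
            for rw, ri in zip(r_weights[b], r_idx[b]):
                # Stage 2: expert routing -- the selected router scores experts.
                e_scores = self.routers[int(ri)](x[b]).softmax(dim=-1)
                e_weights, e_idx = e_scores.topk(self.top_k_experts, -1)
                e_weights = e_weights / e_weights.sum()  # renormalize over top-k
                for ew, ei in zip(e_weights, e_idx):
                    out[b] = out[b] + rw * ew * self.experts[int(ei)](x[b])
        return x + out  # residual injection into the backbone feature
```

A production version would batch the expert computation instead of looping per sample; the loop form here only makes the two routing stages, and how their weights compose, explicit.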