
Brainstacks: Cross-Domain Cognitive Capabilities via Frozen MoE-LoRA Stacks for Continual LLM Learning

April 1, 2026
Author: Mohammad R. Abu Ayyash
cs.AI

Abstract

We present Brainstacks, a modular architecture for continual multi-domain fine-tuning of large language models that packages domain expertise as frozen adapter stacks, composed additively on a shared frozen base at inference. The architecture comprises five interlocking components: (1) MoE-LoRA with Shazeer-style noisy top-2 routing across all seven transformer projections under QLoRA 4-bit quantization with rsLoRA scaling; (2) an inner loop performing residual boosting by freezing trained stacks and adding new ones; (3) an outer loop training sequential domain-specific stacks with curriculum-ordered dependencies; (4) null-space projection via randomized SVD, constraining new stacks to subspaces orthogonal to prior directions and achieving zero forgetting in isolation; (5) an outcome-based sigmoid meta-router trained on empirically discovered domain-combination targets that selectively weights stacks, enabling cross-domain composition. Two boundary experiments extend the design: (6) PSN pretraining on a randomly initialized model, and (7) per-domain RL (DPO/GRPO) validating compatibility with post-SFT alignment. Validated on TinyLlama-1.1B (4 domains, 9 stacks) and Gemma 3 12B IT (5 domains, 10 stacks), MoE-LoRA converges 2.5x faster than a parameter-matched single LoRA, residual boosting breaks through the single-stack ceiling, and the routed system recovers the generation quality destroyed by ungated stack accumulation. The central finding: the outcome-based router discovers that domain stacks encode transferable cognitive primitives (instruction-following clarity, numerical reasoning, procedural logic, chain-of-thought structure) rather than domain-specific knowledge, with medical prompts routing to chat+math stacks in 97% of cases despite zero medical data in those stacks.
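Two of the mechanisms named above, noisy top-2 gating and randomized-SVD null-space projection, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and function names (`noisy_top2_gate`, `nullspace_project`, and the `k + 5` oversampling margin are all assumptions), not the paper's implementation.

```python
import numpy as np

def noisy_top2_gate(x, W_gate, W_noise, rng):
    """Shazeer-style noisy top-2 gating: perturb the gate logits with
    input-dependent Gaussian noise, keep the two best experts, and
    softmax over the kept pair."""
    noise_scale = np.log1p(np.exp(x @ W_noise))            # softplus keeps the noise scale positive
    logits = x @ W_gate + rng.standard_normal(W_gate.shape[1]) * noise_scale
    top2 = np.argsort(logits)[-2:]                         # indices of the two highest-scoring experts
    w = np.exp(logits[top2] - logits[top2].max())
    return top2, w / w.sum()                               # normalized weights over the pair

def nullspace_project(delta_w, prior_dirs, k, rng):
    """Constrain a new stack's update to the subspace orthogonal to the
    top-k directions of earlier stacks, via a randomized-SVD range finder."""
    omega = rng.standard_normal((prior_dirs.shape[1], k + 5))  # oversampled random sketch
    q, _ = np.linalg.qr(prior_dirs @ omega)                    # approximate range basis
    u_small, _, _ = np.linalg.svd(q.T @ prior_dirs, full_matrices=False)
    u = (q @ u_small)[:, :k]                                   # top-k left singular vectors
    return delta_w - u @ (u.T @ delta_w)                       # remove overlap with prior stacks
```

The sigmoid meta-router differs from the softmax gate sketched here in that each stack receives an independent gate in (0, 1), so any subset of frozen stacks can be switched on for a prompt rather than a fixed-size top-k.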