Token-Level LLM Collaboration via FusionRoute
January 8, 2026
Authors: Nuoya Xiong, Yuhang Zhou, Hanqing Zeng, Zhaorun Chen, Furong Huang, Shuchao Bi, Lizhu Zhang, Zhuokai Zhao
cs.AI
Abstract
Large language models (LLMs) exhibit strengths across diverse domains. However, achieving strong performance across these domains with a single general-purpose model typically requires scaling to sizes that are prohibitively expensive to train and deploy. Smaller domain-specialized models, on the other hand, are far more efficient but struggle to generalize beyond their training distributions. To address this dilemma, we propose FusionRoute, a robust and effective token-level multi-LLM collaboration framework in which a lightweight router simultaneously (i) selects the most suitable expert at each decoding step and (ii) contributes complementary logits that refine or correct the selected expert's next-token distribution via logit addition. Unlike existing token-level collaboration methods, which rely solely on fixed expert outputs, FusionRoute is motivated by a theoretical analysis showing that expert-only routing is fundamentally limited: unless strong global coverage assumptions hold, it cannot in general realize the optimal decoding policy. By augmenting expert selection with a trainable complementary generator, FusionRoute expands the effective policy class and enables recovery of the optimal value function under mild conditions. Empirically, across both the Llama-3 and Gemma-2 families and diverse benchmarks spanning mathematical reasoning, code generation, and instruction following, FusionRoute outperforms sequence- and token-level collaboration, model merging, and direct fine-tuning, while remaining competitive with domain experts on their respective tasks.
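To make the decoding mechanism concrete, below is a minimal sketch of one FusionRoute decoding step, assuming a hard argmax over expert scores and a simple linear router head. The names FusionRouter and fusionroute_step, the use of linear heads, and all dimensions are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionRouter(nn.Module):
    """Lightweight router with two heads: one scores the experts,
    the other emits complementary logits over the vocabulary.
    (Architecture and names here are illustrative assumptions.)"""
    def __init__(self, d_model: int, n_experts: int, vocab_size: int):
        super().__init__()
        self.expert_head = nn.Linear(d_model, n_experts)  # (i) expert selection
        self.comp_head = nn.Linear(d_model, vocab_size)   # (ii) complementary logits

    def forward(self, h: torch.Tensor):
        return self.expert_head(h), self.comp_head(h)

def fusionroute_step(h, expert_logit_fns, router):
    """One decoding step: pick the best-scoring expert for this token,
    then refine its next-token logits by adding the router's output."""
    route_scores, comp_logits = router(h)
    k = int(route_scores.argmax())          # hard per-token expert selection
    with torch.no_grad():                   # experts remain frozen
        expert_logits = expert_logit_fns[k]()
    fused = expert_logits + comp_logits     # logit addition
    return F.softmax(fused, dim=-1)         # fused next-token distribution

# Toy usage with random stand-ins for three frozen experts.
d_model, n_experts, vocab = 16, 3, 100
router = FusionRouter(d_model, n_experts, vocab)
h = torch.randn(d_model)                    # router-side context representation
experts = [lambda: torch.randn(vocab) for _ in range(n_experts)]
probs = fusionroute_step(h, experts, router)
```

In this reading of the abstract, only the router's two heads carry gradients while the experts stay frozen, which is what lets the complementary logits correct an expert's next-token distribution without retraining any expert.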