

Token-Level LLM Collaboration via FusionRoute

January 8, 2026
Authors: Nuoya Xiong, Yuhang Zhou, Hanqing Zeng, Zhaorun Chen, Furong Huang, Shuchao Bi, Lizhu Zhang, Zhuokai Zhao
cs.AI

Abstract

Large language models (LLMs) exhibit strengths across diverse domains. However, achieving strong performance across these domains with a single general-purpose model typically requires scaling to sizes that are prohibitively expensive to train and deploy. On the other hand, while smaller domain-specialized models are much more efficient, they struggle to generalize beyond their training distributions. To address this dilemma, we propose FusionRoute, a robust and effective token-level multi-LLM collaboration framework in which a lightweight router simultaneously (i) selects the most suitable expert at each decoding step and (ii) contributes complementary logits that refine or correct the selected expert's next-token distribution via logit addition. Unlike existing token-level collaboration methods that rely solely on fixed expert outputs, we provide a theoretical analysis showing that pure expert-only routing is fundamentally limited: unless strong global coverage assumptions hold, it cannot in general realize the optimal decoding policy. By augmenting expert selection with a trainable complementary generator, FusionRoute expands the effective policy class and enables recovery of optimal value functions under mild conditions. Empirically, across both the Llama-3 and Gemma-2 families and diverse benchmarks spanning mathematical reasoning, code generation, and instruction following, FusionRoute outperforms sequence- and token-level collaboration, model merging, and direct fine-tuning, while remaining competitive with domain experts on their respective tasks.
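To make the two-part mechanism concrete, below is a minimal decoding sketch, assuming Hugging Face-style causal LMs (objects exposing `.logits` and `.hidden_states`) and batch size 1. The `Router` class, `fusionroute_decode`, and the choice of the first expert's last hidden state as the router's input are illustrative assumptions for this sketch, not the paper's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    """Lightweight router with two heads: expert selection and complementary logits."""
    def __init__(self, hidden_size: int, num_experts: int, vocab_size: int):
        super().__init__()
        self.select = nn.Linear(hidden_size, num_experts)  # (i) expert-selection head
        self.comp = nn.Linear(hidden_size, vocab_size)     # (ii) complementary-logit head

    def forward(self, h: torch.Tensor):
        return self.select(h), self.comp(h)

@torch.no_grad()
def fusionroute_decode(router, experts, input_ids, max_new_tokens=64):
    """Greedy decoding: at each step, pick one expert and add the router's
    complementary logits to that expert's next-token logits."""
    ids = input_ids  # [1, seq]; batch size 1 assumed for simplicity
    for _ in range(max_new_tokens):
        # Context features for the router; reusing expert 0's last hidden
        # state is a stand-in for whatever representation the router consumes.
        out = experts[0](ids, output_hidden_states=True)
        h = out.hidden_states[-1][:, -1]                  # [1, hidden]
        expert_scores, comp_logits = router(h)
        k = int(expert_scores.argmax(dim=-1))             # select expert at this step
        expert_logits = experts[k](ids).logits[:, -1]     # [1, vocab]
        fused = expert_logits + comp_logits               # logit addition
        next_id = fused.argmax(dim=-1, keepdim=True)      # [1, 1]
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```

The design point the sketch illustrates: because `comp_logits` is trainable and added to the chosen expert's logits, the realizable policy class is strictly larger than what expert-only routing can express, which is what the paper's theoretical argument turns on.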