

Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models

November 15, 2023
作者: Keming Lu, Hongyi Yuan, Runji Lin, Junyang Lin, Zheng Yuan, Chang Zhou, Jingren Zhou
cs.AI

Abstract

The complementary potential of Large Language Models (LLMs) assumes that off-the-shelf LLMs have heterogeneous expertise across a wide range of domains and tasks, so that an ensemble of LLMs can achieve consistently better performance. Existing ensemble methods for LLMs mainly focus on reward model ranking of outputs, which incurs significant computation overhead. To combat this issue, we revisit the complementary potential of LLMs and elaborate on it by mining latent expertise with off-the-shelf reward models. We propose Zooter, a reward-guided routing method that distills rewards on training queries to train a routing function, which can precisely distribute each query to the LLM with the relevant expertise. We also integrate a tag-based label enhancement to mitigate noise from the uncertainty of using rewards as silver supervision. Zooter is computationally efficient at inference, as it introduces only the minor overhead of a routing function compared with reward model ranking methods. We evaluate Zooter on a comprehensive benchmark collection with 26 subsets covering different domains and tasks. Zooter outperforms the best single model on average and ranks first on 44% of tasks, even surpassing multiple reward model ranking methods.
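To make the reward-guided routing idea concrete, below is a minimal Python sketch, not the authors' implementation: it assumes a simple linear router over query embeddings, softmax-normalized per-LLM rewards as soft (silver) labels, and toy synthetic data. The `route` helper, shapes, and hyperparameters are illustrative assumptions, and the paper's tag-based label enhancement is omitted.

```python
# Minimal sketch of reward-guided routing in the spirit of Zooter (not the paper's code).
# A linear router is trained by distilling per-LLM rewards on training queries into
# soft labels; at inference each query is dispatched to the argmax expert, so only
# one LLM generates instead of scoring every LLM's output with a reward model.
import numpy as np

rng = np.random.default_rng(0)

num_experts = 4    # number of off-the-shelf LLMs in the ensemble (assumed)
embed_dim = 32     # dimensionality of the query embedding (assumed)
num_train = 512

# Toy training data: query embeddings and one scalar reward per (query, LLM) pair,
# e.g. from an off-the-shelf reward model scoring each LLM's output on the query.
queries = rng.normal(size=(num_train, embed_dim))
rewards = rng.normal(size=(num_train, num_experts))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Soft routing labels: normalize rewards per query (a temperature could sharpen them).
soft_labels = softmax(rewards, axis=1)

# Linear router trained with cross-entropy against the soft labels (equivalent to
# KL distillation up to a constant), using plain gradient descent.
W = np.zeros((embed_dim, num_experts))
b = np.zeros(num_experts)
lr = 0.1
for _ in range(200):
    probs = softmax(queries @ W + b, axis=1)         # router's routing distribution
    grad_logits = (probs - soft_labels) / num_train  # d(mean cross-entropy)/d(logits)
    W -= lr * (queries.T @ grad_logits)
    b -= lr * grad_logits.sum(axis=0)

def route(query_embedding):
    """Return the index of the expert LLM the router picks for one query."""
    logits = query_embedding @ W + b
    return int(np.argmax(logits))

# Inference: only the cheap router runs per query, then a single chosen LLM generates,
# unlike reward-model ranking, which must run every LLM and score every output.
test_query = rng.normal(size=embed_dim)
print("route query to expert", route(test_query))
```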