Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models
November 15, 2023
Authors: Keming Lu, Hongyi Yuan, Runji Lin, Junyang Lin, Zheng Yuan, Chang Zhou, Jingren Zhou
cs.AI
Abstract
The complementary potential of Large Language Models (LLMs) assumes that off-the-shelf LLMs have heterogeneous expertise across a wide range of domains and tasks, so that an ensemble of LLMs can achieve consistently better performance. Existing ensemble methods for LLMs mainly focus on reward model ranking of outputs, which incurs significant computation overhead. To combat this issue, we revisit the complementary potential of LLMs and elaborate on it by mining latent expertise with off-the-shelf reward models. We propose Zooter, a reward-guided routing method that distills rewards on training queries to train a routing function, which can precisely route each query to the LLM with the relevant expertise. We also integrate a tag-based label enhancement to mitigate the noise from uncertainty when using rewards as silver supervision. Zooter is computationally efficient at inference, as it introduces only the minor overhead of a routing function compared with reward model ranking methods. We evaluate Zooter on a comprehensive benchmark collection with 26 subsets covering different domains and tasks. Zooter outperforms the best single model on average and ranks first on 44% of tasks, even surpassing multiple reward model ranking methods.
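
The reward-distillation recipe described in the abstract is straightforward to prototype. Below is a minimal sketch, assuming that per-query rewards for each candidate LLM are normalized with a softmax into soft routing targets and that a small classifier is trained on them with a KL-divergence loss; the PyTorch code, the hashed bag-of-words features, and names such as `Router`, `train_router`, and `route` are illustrative stand-ins, not the paper's released implementation.

```python
# Minimal sketch: distill per-query rewards from an off-the-shelf reward model
# into a small routing function (illustrative, not the paper's code release).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPERTS = 4   # number of candidate off-the-shelf LLMs
FEAT_DIM = 256    # toy query-feature dimension (stand-in for a real text encoder)

def featurize(query: str) -> torch.Tensor:
    """Toy hashed bag-of-words features; a real router would use a text encoder."""
    vec = torch.zeros(FEAT_DIM)
    for tok in query.lower().split():
        vec[hash(tok) % FEAT_DIM] += 1.0
    return vec

class Router(nn.Module):
    """Small routing function: query features -> logits over expert LLMs."""
    def __init__(self, feat_dim: int, num_experts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_experts)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

def silver_targets(rewards: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Turn per-expert rewards on a query into a soft routing target (silver label)."""
    return F.softmax(rewards / temperature, dim=-1)

def train_router(router, queries, reward_matrix, epochs: int = 20, lr: float = 1e-3):
    """reward_matrix[i, j] = reward-model score of expert j's output on query i."""
    feats = torch.stack([featurize(q) for q in queries])
    targets = silver_targets(reward_matrix)
    opt = torch.optim.Adam(router.parameters(), lr=lr)
    for _ in range(epochs):
        log_probs = F.log_softmax(router(feats), dim=-1)
        loss = F.kl_div(log_probs, targets, reduction="batchmean")  # distillation loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return router

def route(router, query: str) -> int:
    """Inference: pick a single expert per query, so only one LLM is ever called."""
    with torch.no_grad():
        return int(router(featurize(query)).argmax())

if __name__ == "__main__":
    queries = ["solve 3x + 5 = 11", "write a haiku about autumn",
               "translate 'good morning' to French", "debug this python loop"]
    # Placeholder rewards; in practice each expert's output is scored by a reward model.
    rewards = torch.randn(len(queries), NUM_EXPERTS)
    router = train_router(Router(FEAT_DIM, NUM_EXPERTS), queries, rewards)
    print("expert chosen for query 0:", route(router, queries[0]))
```

The efficiency argument follows from this setup: at inference only the router and the single selected LLM run on each query, whereas reward model ranking must generate with every candidate and then score all of the outputs.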