

Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM

January 4, 2024
作者: Xiaoding Lu, Adian Liusie, Vyas Raina, Yuwen Zhang, William Beauchamp
cs.AI

Abstract

In conversational AI research, there's a noticeable trend towards developing models with a larger number of parameters, exemplified by models like ChatGPT. While these expansive models tend to generate increasingly better chat responses, they demand significant computational resources and memory. This study explores a pertinent question: Can a combination of smaller models collaboratively achieve comparable or enhanced performance relative to a singular large model? We introduce an approach termed "blending", a straightforward yet effective method of integrating multiple chat AIs. Our empirical evidence suggests that when specific smaller models are synergistically blended, they can potentially outperform or match the capabilities of much larger counterparts. For instance, integrating just three models of moderate size (6B/13B parameters) can rival or even surpass the performance metrics of a substantially larger model like ChatGPT (175B+ parameters). This hypothesis is rigorously tested using A/B testing methodologies with a large user base on the Chai research platform over a span of thirty days. The findings underscore the potential of the "blending" strategy as a viable approach for enhancing chat AI efficacy without a corresponding surge in computational demands.
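The abstract does not spell out the mechanics of "blending"; a minimal sketch of one natural reading is below, where each conversation turn is answered by a component model drawn at random, conditioned on the full dialogue history (including turns produced by the other models). The `StubChatModel` class and its `generate` method are hypothetical stand-ins for illustration, not the authors' code.

```python
import random
from dataclasses import dataclass

@dataclass
class StubChatModel:
    """Hypothetical stand-in for a moderate-size chat model (e.g. 6B or 13B parameters)."""
    name: str

    def generate(self, conversation: list[str]) -> str:
        # A real model would condition on the full conversation history.
        return f"[{self.name}] reply to: {conversation[-1]}"

def blended_reply(models: list[StubChatModel], conversation: list[str]) -> str:
    # Core of the blending idea: on each turn, one component model is
    # sampled at random and generates the next response, conditioned on
    # the whole history -- including turns produced by the other models.
    chosen = random.choice(models)
    return chosen.generate(conversation)

# Usage: three moderate-size models blended into a single chat AI.
models = [StubChatModel("model-6B-a"), StubChatModel("model-6B-b"), StubChatModel("model-13B")]
history = ["Hi there!"]
history.append(blended_reply(models, history))
print(history[-1])
```

Because successive turns can come from different models yet share one history, the blend behaves as a single chat AI whose per-response compute cost is that of one small model, not the sum of all of them.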