

Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models

February 10, 2024
Authors: Keisuke Kamahori, Yile Gu, Kan Zhu, Baris Kasikci
cs.AI

Abstract

Large Language Models (LLMs) based on Mixture-of-Experts (MoE) architecture are showing promising performance on various tasks. However, running them in resource-constrained settings, where GPU memory resources are not abundant, is challenging due to huge model sizes. Existing systems that offload model weights to CPU memory suffer from the significant overhead of frequently moving data between CPU and GPU. In this paper, we propose Fiddler, a resource-efficient inference engine with CPU-GPU orchestration for MoE models. The key idea of Fiddler is to use the computation ability of the CPU to minimize the data movement between the CPU and GPU. Our evaluation shows that Fiddler can run the uncompressed Mixtral-8x7B model, which exceeds 90GB in parameters, to generate over 3 tokens per second on a single GPU with 24GB memory, showing an order of magnitude improvement over existing methods. The code of Fiddler is publicly available at https://github.com/efeslab/fiddler
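The key idea above can be illustrated with a toy sketch: when a routed-to expert's weights reside in CPU memory, the orchestrator ships the small per-token activation to the CPU and computes there, rather than shipping the much larger expert weights to the GPU as weight-offloading systems do. The sketch below is a minimal, hypothetical illustration in NumPy (the placement map, sizes, and function names are assumptions for exposition, not Fiddler's actual API):

```python
import numpy as np

D, H = 8, 32  # toy hidden size and expert intermediate size

rng = np.random.default_rng(0)

# Hypothetical placement map: which experts fit in GPU memory.
placement = {0: "gpu", 1: "cpu", 2: "cpu", 3: "gpu"}
weights = {i: (rng.standard_normal((D, H)), rng.standard_normal((H, D)))
           for i in placement}

def run_expert(i, x):
    # Simple two-layer FFN expert (ReLU in the middle).
    w1, w2 = weights[i]
    return np.maximum(x @ w1, 0.0) @ w2

def moe_layer(x, chosen, bytes_moved):
    """Dispatch token x to expert `chosen`, Fiddler-style."""
    if placement[chosen] == "gpu":
        pass  # weights already resident on the GPU: no transfer at all
    else:
        # Move only the activation to the CPU and the result back
        # (2*D floats), NOT the expert weights (2*D*H floats) as
        # weight-offloading systems would.
        bytes_moved["activations"] += 2 * D * x.itemsize
    return run_expert(chosen, x)

x = rng.standard_normal(D)
moved = {"activations": 0}
y = moe_layer(x, chosen=1, bytes_moved=moved)

# Activation traffic vs. what a weight transfer would have cost:
print(moved["activations"], 2 * D * H * x.itemsize)  # 128 4096
```

Even at these toy sizes, the activation transfer is a factor of H smaller than the weight transfer; at Mixtral-8x7B scale, each expert is gigabytes while a token activation is kilobytes, which is why keeping the computation where the weights already live pays off.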

