LLM in a flash: Efficient Large Language Model Inference with Limited Memory
December 12, 2023
作者: Keivan Alizadeh, Iman Mirzadeh, Dmitry Belenko, Karen Khatamifard, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, Mehrdad Farajtabar
cs.AI
Abstract
Large language models (LLMs) are central to modern natural language
processing, delivering exceptional performance in various tasks. However, their
intensive computational and memory requirements present challenges, especially
for devices with limited DRAM capacity. This paper tackles the challenge of
efficiently running LLMs that exceed the available DRAM capacity by storing the
model parameters on flash memory but bringing them on demand to DRAM. Our
method involves constructing an inference cost model that harmonizes with the
flash memory behavior, guiding us to optimize in two critical areas: reducing
the volume of data transferred from flash and reading data in larger, more
contiguous chunks. Within this flash memory-informed framework, we introduce
two principal techniques. First, "windowing" strategically reduces data
transfer by reusing previously activated neurons, and second, "row-column
bundling", tailored to the sequential data access strengths of flash memory,
increases the size of data chunks read from flash memory. These methods
collectively enable running models up to twice the size of the available DRAM,
with a 4-5x and 20-25x increase in inference speed compared to naive loading
approaches in CPU and GPU, respectively. Our integration of sparsity awareness,
context-adaptive loading, and a hardware-oriented design paves the way for
effective inference of LLMs on devices with limited memory.
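The two techniques above can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration of the ideas only, not the authors' implementation: the toy dimensions, the file name ffn_bundled.bin, the window size, and the predict_active_neurons stand-in for the paper's activation predictor are all assumptions made for the example. A memory-mapped file stands in for flash; "row-column bundling" stores a neuron's up-projection column next to its down-projection row so each neuron is one contiguous read, and "windowing" keeps neurons activated in the last few tokens resident in DRAM so only newly activated neurons are fetched.

```python
# Illustrative sketch (not the authors' code): windowing + row-column bundling
# for loading FFN neurons on demand from "flash" (here, a memory-mapped file).
import numpy as np

D_MODEL, D_FF, WINDOW = 64, 256, 4           # toy sizes; real models are far larger

# --- Row-column bundling --------------------------------------------------
# Neuron i of an FFN needs column i of the up-projection and row i of the
# down-projection. Storing them back to back means one larger contiguous read
# per neuron instead of two scattered ones.
bundled = np.random.randn(D_FF, 2 * D_MODEL).astype(np.float32)
bundled.tofile("ffn_bundled.bin")
flash = np.memmap("ffn_bundled.bin", dtype=np.float32,
                  mode="r", shape=(D_FF, 2 * D_MODEL))

# --- Windowing --------------------------------------------------------------
# Keep neurons activated for the last WINDOW tokens resident in DRAM and only
# fetch the newly activated ones from flash.
dram_cache = {}                               # neuron id -> (up_col, down_row)
recent_active = []                            # per-token active neuron sets

def predict_active_neurons(hidden_state, k=32):
    """Stand-in for the activation predictor; real systems use a small predictor."""
    scores = np.abs(bundled[:, :D_MODEL] @ hidden_state)
    return set(np.argpartition(scores, -k)[-k:].tolist())

def load_token(hidden_state):
    active = predict_active_neurons(hidden_state)
    new_ids = active - dram_cache.keys()      # only these neurons touch flash
    for i in new_ids:
        rec = np.array(flash[i])              # one contiguous (bundled) read
        dram_cache[i] = (rec[:D_MODEL], rec[D_MODEL:])
    recent_active.append(active)
    if len(recent_active) > WINDOW:           # evict neurons outside the window
        recent_active.pop(0)
        still_needed = set().union(*recent_active)
        for i in list(dram_cache):
            if i not in still_needed:
                del dram_cache[i]
    return len(new_ids)

for t in range(8):
    x = np.random.randn(D_MODEL).astype(np.float32)
    print(f"token {t}: loaded {load_token(x)} neurons from flash, "
          f"{len(dram_cache)} resident in DRAM")
```

The sketch reflects the cost model described in the abstract: reuse across the window shrinks the volume of data transferred from flash, while bundling turns each remaining transfer into a larger sequential read, which flash serves at much higher throughput than small random reads.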