Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
October 26, 2023
Authors: Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, Beidi Chen
cs.AI
Abstract
Large language models (LLMs) with hundreds of billions of parameters have
sparked a new wave of exciting AI applications. However, they are
computationally expensive at inference time. Sparsity is a natural approach to
reduce this cost, but existing methods either require costly retraining, have
to forgo LLM's in-context learning ability, or do not yield wall-clock time
speedup on modern hardware. We hypothesize that contextual sparsity, i.e.,
small, input-dependent sets of attention heads and MLP parameters that yield
approximately the same output as the dense model for a given input, can address
these issues. We show that contextual sparsity exists, that it can be
accurately predicted, and that we can exploit it to speed up LLM inference in
wall-clock time without compromising LLM's quality or in-context learning
ability. Based on these insights, we propose DejaVu, a system that uses a
low-cost algorithm to predict contextual sparsity on the fly given inputs to
each layer, along with an asynchronous and hardware-aware implementation that
speeds up LLM inference. We validate that DejaVu can reduce the inference
latency of OPT-175B by over 2X compared to the state-of-the-art
FasterTransformer, and over 6X compared to the widely used Hugging Face
implementation, without compromising model quality. The code is available at
https://github.com/FMInference/DejaVu.
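
To make the idea concrete, below is a minimal sketch (in PyTorch) of how contextual sparsity can be exploited in a single MLP block during token-by-token decoding: a small, low-cost predictor scores the hidden neurons from the layer input, and only the selected rows and columns of the weight matrices are used. The class name, dimensions, predictor architecture, and keep ratio are illustrative assumptions, not the DejaVu implementation.

# Illustrative sketch of contextual sparsity in an MLP block (not the DejaVu code).
# A cheap predictor selects which hidden neurons to compute for a given input;
# only the corresponding weight rows/columns participate in the matmuls.
import torch

class SparsePredictedMLP(torch.nn.Module):
    def __init__(self, d_model=1024, d_hidden=4096, d_pred=128, keep_ratio=0.1):
        super().__init__()
        # Dense MLP weights, as in a standard transformer feed-forward block.
        self.w1 = torch.nn.Parameter(torch.randn(d_hidden, d_model) * 0.02)
        self.w2 = torch.nn.Parameter(torch.randn(d_model, d_hidden) * 0.02)
        # Low-cost predictor: a small bottleneck network that scores each
        # hidden neuron's likelihood of being active for this input.
        self.pred = torch.nn.Sequential(
            torch.nn.Linear(d_model, d_pred, bias=False),
            torch.nn.Linear(d_pred, d_hidden, bias=False),
        )
        self.keep = int(keep_ratio * d_hidden)

    def forward(self, x):
        # x: (d_model,) hidden state of the current token.
        scores = self.pred(x)                         # cheap per-neuron scores
        idx = torch.topk(scores, self.keep).indices   # predicted active neurons
        h = torch.relu(self.w1[idx] @ x)              # compute only selected rows
        return self.w2[:, idx] @ h                    # and the matching columns

x = torch.randn(1024)
y = SparsePredictedMLP()(x)
print(y.shape)  # torch.Size([1024])

A similar top-k selection can be made over attention heads; for the sparsity to translate into wall-clock speedup, the predictor itself must be cheap and kept off the critical path, which is what the asynchronous, hardware-aware implementation mentioned above is for.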