

Efficient LLM inference solution on Intel GPU

December 19, 2023
Authors: Hui Wu, Yi Gan, Feng Yuan, Jing Ma, Wei Zhu, Yutao Xu, Hong Zhu, Yuhua Zhu, Xiaoli Liu, Jinghui Gu
cs.AI

Abstract

Transformer-based Large Language Models (LLMs) have been widely used in many fields, and the efficiency of LLM inference has become a hot topic in real applications. However, LLMs usually have complicated model structures with massive operations and perform inference in the auto-regressive mode, making it challenging to design a system with high efficiency. In this paper, we propose an efficient LLM inference solution with low latency and high throughput. First, we simplify the LLM decoder layer by fusing data movement and element-wise operations to reduce memory access frequency and lower system latency. We also propose a segment KV cache policy that keeps the key/value of request and response tokens in separate physical memory for effective device memory management, helping enlarge the runtime batch size and improve system throughput. A customized Scaled-Dot-Product-Attention kernel is designed to match our fusion policy based on the segment KV cache solution. We implement our LLM inference solution on Intel GPU and release it publicly. Compared with the standard HuggingFace implementation, the proposed solution achieves up to 7x lower token latency and 27x higher throughput for some popular LLMs on Intel GPU.
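To make the segment KV cache idea from the abstract concrete, below is a minimal PyTorch sketch, not the authors' Intel GPU implementation or their customized kernel. It only illustrates the concept of keeping prompt (request) key/value tensors in a physical buffer separate from the growing response buffer, and reading from both segments during a decode step via the standard scaled-dot-product-attention API. Names such as SegmentKVCache, max_response_len, and decode_step_attention are hypothetical.

```python
# Sketch of a segment KV cache: prompt KV and response KV live in separate buffers,
# so prompt memory can be managed independently of the per-token generation buffer.
import torch
import torch.nn.functional as F


class SegmentKVCache:
    """Keeps prompt KV (fixed after prefill) apart from response KV (grows per step)."""

    def __init__(self, prompt_k, prompt_v, max_response_len, device):
        # Prompt segment: written once during prefill, shape (batch, heads, prompt_len, head_dim).
        self.prompt_k, self.prompt_v = prompt_k, prompt_v
        b, h, _, d = prompt_k.shape
        # Response segment: pre-allocated separately from the prompt segment.
        self.resp_k = torch.empty(b, h, max_response_len, d, device=device, dtype=prompt_k.dtype)
        self.resp_v = torch.empty_like(self.resp_k)
        self.resp_len = 0

    def append(self, k_step, v_step):
        # k_step / v_step: (batch, heads, 1, head_dim) for the newly generated token.
        self.resp_k[:, :, self.resp_len : self.resp_len + 1] = k_step
        self.resp_v[:, :, self.resp_len : self.resp_len + 1] = v_step
        self.resp_len += 1

    def keys_values(self):
        # Concatenate the two segments only for the attention computation.
        k = torch.cat([self.prompt_k, self.resp_k[:, :, : self.resp_len]], dim=2)
        v = torch.cat([self.prompt_v, self.resp_v[:, :, : self.resp_len]], dim=2)
        return k, v


def decode_step_attention(q_step, cache):
    # q_step: (batch, heads, 1, head_dim) query for the current decode step.
    k, v = cache.keys_values()
    # Stand-in for the paper's customized SDPA kernel: the single query token attends
    # over both KV segments; no causal mask is needed since only past tokens are cached.
    return F.scaled_dot_product_attention(q_step, k, v)
```

In the paper's actual solution, this attention is a customized kernel matched to their operator-fusion policy on Intel GPU; the sketch above uses the generic PyTorch SDPA call purely to show how the two cache segments are consumed.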