
Inference Performance Optimization for Large Language Models on CPUs

July 10, 2024
Authors: Pujiang He, Shan Zhou, Wenhuan Huang, Changqing Li, Duyi Wang, Bin Guo, Chen Meng, Sheng Gui, Weifei Yu, Yi Xie
cs.AI

Abstract

Large language models (LLMs) have shown exceptional performance and vast potential across diverse tasks. However, deploying high-performance LLMs in low-resource environments has garnered significant attention in the industry. When GPU hardware resources are limited, we can explore alternatives on CPUs. To mitigate the financial burden and alleviate the constraints imposed by hardware resources, optimizing inference performance is necessary. In this paper, we introduce an easily deployable inference performance optimization solution aimed at accelerating LLMs on CPUs. As part of this solution, we implement an effective way to reduce the KV cache size while preserving precision. We propose a distributed inference optimization approach and implement it on top of the oneAPI Collective Communications Library (oneCCL). Furthermore, we propose optimization approaches for LLMs on CPUs and conduct tailored optimizations for the most commonly used models. The code is open-sourced at https://github.com/intel/xFasterTransformer.
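The abstract mentions reducing the KV cache size while preserving precision but does not spell out the scheme here. Below is a minimal illustrative sketch, assuming per-row INT8 quantization with stored scales as one plausible way to shrink a KV cache block; the function names and tensor shapes are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch: per-row INT8 quantization of a KV cache block.
# This is NOT the paper's method, only one common way to trade a little
# precision for a ~4x smaller cache.
import numpy as np

def quantize_kv(block: np.ndarray):
    """Quantize an fp32 KV cache block to int8 plus a per-row scale."""
    # One scale per row (e.g., per attention head) keeps rounding error bounded.
    scale = np.abs(block).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    q = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate fp32 block before the attention matmul."""
    return q.astype(np.float32) * scale

# Example: an 8-head x 64-dim block drops from 2 KiB (fp32) to ~0.5 KiB (int8).
kv = np.random.randn(8, 64).astype(np.float32)
q, s = quantize_kv(kv)
print(f"max abs error: {np.abs(dequantize_kv(q, s) - kv).max():.4f}")
```

For the distributed part, the paper builds on oneCCL. A common way to reach oneCCL from Python is through its PyTorch binding (oneccl_bindings_for_pytorch); the tensor-parallel all-reduce pattern below is an assumption for illustration, not the paper's exact design.

```python
# Hypothetical sketch: summing per-rank partial outputs across CPU sockets
# with an all-reduce over the oneCCL backend. Shapes and partitioning are
# illustrative assumptions.
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

# Defaults let the sketch run single-process; real runs launch via mpirun/torchrun.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")
dist.init_process_group(backend="ccl")

# Each rank computes a shard of the attention/MLP output; partial results
# are then summed across ranks with an all-reduce over oneCCL.
partial = torch.randn(1, 4096)  # this rank's partial output (hypothetical shape)
dist.all_reduce(partial, op=dist.ReduceOp.SUM)
dist.destroy_process_group()
```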
