
InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU

February 13, 2025
Authors: Heejun Lee, Geon Park, Jaduk Suh, Sung Ju Hwang
cs.AI

Abstract

In modern large language models (LLMs), handling very long context lengths presents significant challenges, as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce InfiniteHiP, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
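The central mechanism described in the abstract is hierarchical token pruning: the long key-value context is partitioned into blocks, and successive stages discard the blocks least relevant to the current query, so exact attention is ultimately computed over only a small surviving subset of tokens. The sketch below illustrates this idea in PyTorch; it is a minimal conceptual example, not the authors' InfiniteHiP implementation, and the function name, block size, and stage budgets are illustrative assumptions.

```python
# Minimal sketch of hierarchical top-k block pruning for attention
# (illustrative only; not the InfiniteHiP algorithm itself).
import torch

def hierarchical_prune_attention(q, k, v, block_size=64, keep_blocks=(256, 32)):
    """q: (d,), k/v: (n, d). Exact attention over a hierarchically pruned context."""
    n, d = k.shape
    n_blocks = n // block_size
    k_blocks = k[: n_blocks * block_size].view(n_blocks, block_size, d)

    # Score each block by its best-matching token (a cheap block "representative").
    block_scores = torch.einsum("d,nbd->nb", q, k_blocks).max(dim=-1).values

    # Hierarchy of stages: progressively tighter budgets keep fewer blocks.
    selected = torch.arange(n_blocks)
    for budget in keep_blocks:
        if budget < selected.numel():
            top = block_scores[selected].topk(budget).indices
            selected = selected[top]

    # Gather surviving token indices and run exact attention on them only.
    token_idx = (selected[:, None] * block_size + torch.arange(block_size)).flatten()
    k_sel, v_sel = k[token_idx], v[token_idx]
    attn = torch.softmax(k_sel @ q / d**0.5, dim=0)
    return attn @ v_sel

# Usage: a 100k-token cache is pruned to 32 blocks of 64 tokens before exact attention.
q = torch.randn(128)
k = torch.randn(100_000, 128)
v = torch.randn(100_000, 128)
out = hierarchical_prune_attention(q, k, v)
```

In the sketch, pruning cost grows with the number of blocks rather than the number of tokens, which is why combining it with host-memory offloading of the key-value cache (as the abstract describes) can keep GPU memory bounded while preserving access to the full context.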
