LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

January 2, 2024
作者: Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
cs.AI

Abstract

This work elicits LLMs' inherent ability to handle long contexts without fine-tuning. The limited length of training sequences may restrict the application of Large Language Models (LLMs) to long input sequences at inference time. In this work, we argue that existing LLMs themselves have inherent capabilities for handling long contexts. Based on this argument, we suggest extending LLMs' context window by themselves to fully utilize this inherent ability. We propose Self-Extend to stimulate LLMs' long-context handling potential. The basic idea is to construct bi-level attention information: the group level and the neighbor level. Both levels are computed with the original model's self-attention, which means the proposed method does not require any training. With only four lines of code modification, the proposed method can effortlessly extend existing LLMs' context window without any fine-tuning. We conduct comprehensive experiments, and the results show that the proposed method can effectively extend the context window length of existing LLMs.
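To make the bi-level idea concrete, below is a minimal sketch (not the authors' code) of the kind of relative-position mapping the abstract describes: tokens within a local window keep their exact relative positions (neighbor level), while more distant tokens share coarser, floor-divided positions (group level), so no relative position ever exceeds the range seen during pretraining. The parameter names `group_size` and `neighbor_window` are illustrative assumptions.

```python
import torch


def self_extend_rel_positions(seq_len: int, group_size: int, neighbor_window: int) -> torch.Tensor:
    """Return a (seq_len, seq_len) matrix of modified relative positions (query - key)."""
    q = torch.arange(seq_len).unsqueeze(1)   # query positions, column vector
    k = torch.arange(seq_len).unsqueeze(0)   # key positions, row vector
    rel = q - k                              # standard relative distances

    # Group level: floor-divide both positions, then shift so grouped distances
    # continue right after the neighbor window instead of overlapping it.
    grouped = q // group_size - k // group_size
    grouped = grouped + (neighbor_window - neighbor_window // group_size)

    # Neighbor level applies inside the window; group level applies beyond it.
    return torch.where(rel <= neighbor_window, rel, grouped)


# Example: distant tokens collapse onto shared coarse positions, keeping the
# maximum relative distance far below the raw sequence length.
print(self_extend_rel_positions(16, group_size=4, neighbor_window=8))
```

In practice this mapping would be applied inside the model's attention (e.g. when computing rotary position embeddings), which is consistent with the paper's claim that only a few lines of the attention code need to change and no weights are updated.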