

Extending Context Window of Large Language Models via Positional Interpolation

June 27, 2023
Authors: Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian
cs.AI

Abstract

We present Position Interpolation (PI), which extends the context window sizes of RoPE-based pretrained LLMs such as the LLaMA models to up to 32768 with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization, for LLaMA models from 7B to 65B. Meanwhile, models extended by Position Interpolation preserve quality relatively well on tasks within their original context window. To achieve this goal, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length, which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least ~600x smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain their original architecture and can reuse most pre-existing optimizations and infrastructure.
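
To make the down-scaling concrete, below is a minimal PyTorch sketch of applying RoPE with interpolated position indices. The helper names (`rope_angles`, `apply_rope`) and the window sizes `L` and `L_ext` are illustrative assumptions, not code from the paper; the only PI-specific step is rescaling each position index m to m * L / L_ext before computing the rotary angles.

```python
import torch

# Minimal sketch of Position Interpolation (PI) on RoPE position indices.
# Helper names and window sizes are illustrative assumptions, not the paper's code.

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles: theta_{m,j} = m * base^(-2j/dim)."""
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    return positions[:, None].float() * inv_freq[None, :]  # (seq_len, dim // 2)

def apply_rope(x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Rotate consecutive channel pairs of x (seq_len, dim) by the RoPE angles."""
    angles = rope_angles(positions, x.shape[-1])
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

L, L_ext = 2048, 8192                    # pretrained and extended context windows
positions = torch.arange(L_ext)          # 0, 1, ..., L_ext - 1
pi_positions = positions * (L / L_ext)   # PI: m -> m * L / L_ext, stays in [0, L)

x = torch.randn(L_ext, 128)              # dummy query/key states (seq_len, head_dim)
x_pi = apply_rope(x, pi_positions)       # interpolation: angles stay in the trained range
x_extrap = apply_rope(x, positions)      # naive extrapolation: unseen angles beyond L
```

Because every interpolated index falls inside the pretrained range [0, L), the rotary angles the model sees are exactly those it was trained on, consistent with the abstract's claim that only brief fine-tuning (within 1000 steps) is needed, whereas naive extrapolation feeds the model angles it has never encountered.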