

LongRoPE2: Near-Lossless LLM Context Window Scaling

February 27, 2025
Authors: Ning Shang, Li Lyna Zhang, Siyuan Wang, Gaokai Zhang, Gilsinia Lopez, Fan Yang, Weizhu Chen, Mao Yang
cs.AI

Abstract

LongRoPE2 is a novel approach that extends the effective context window of pre-trained large language models (LLMs) to the target length, while preserving the performance on the original shorter context window. This is achieved by three contributions: (1) a hypothesis that insufficient training in higher RoPE dimensions contributes to the persistent out-of-distribution (OOD) issues observed in existing methods; (2) an effective RoPE rescaling algorithm that adopts evolutionary search guided by "needle-driven" perplexity to address the insufficient training problem; (3) a mixed context window training approach that fine-tunes model weights to adopt rescaled RoPE for long-context sequences while preserving the short-context performance with the original RoPE. Extensive experiments on LLaMA3-8B and Phi3-mini-3.8B across various benchmarks validate the hypothesis and demonstrate the effectiveness of LongRoPE2. Remarkably, LongRoPE2 extends LLaMA3-8B to achieve a 128K effective context length while retaining over 98.5% of short-context performance, using only 10B tokens -- 80x fewer than Meta's approach, which fails to reach the target effective context length. Code will be available at https://github.com/microsoft/LongRoPE.
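The rescaling the abstract refers to adjusts RoPE's per-dimension rotary frequencies so that positions beyond the original context window stay in-distribution. The sketch below is a minimal illustration of that idea only; the scale factors, dimension split, and function names are assumptions for demonstration, not the search-derived values or code released by the authors.

```python
import numpy as np

def rope_frequencies(head_dim: int, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE inverse frequencies, one per rotary dimension pair."""
    return 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))

def rescaled_frequencies(head_dim: int, scales: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Divide each inverse frequency by a per-dimension scale factor.

    Larger scales on the higher (slower-rotating) dimensions stretch their
    rotation period, which is the kind of per-dimension rescaling that
    LongRoPE2 searches over (here filled with made-up values).
    """
    return rope_frequencies(head_dim, base) / scales

# Illustrative only: scale the upper half of the dimensions by the
# context extension ratio (128K / 8K = 16), leave the lower half unchanged.
head_dim = 128
scales = np.ones(head_dim // 2)
scales[head_dim // 4:] = 16.0
freqs = rescaled_frequencies(head_dim, scales)

# Rotary angles for a position p are p * freqs, applied as in standard RoPE.
position = 100_000
angles = position * freqs
```

In LongRoPE2 the per-dimension factors are not hand-picked as above but found by an evolutionary search scored with "needle-driven" perplexity, and the model is then fine-tuned with mixed context windows so the original RoPE still serves short inputs.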
