Adaptive Layer-skipping in Pre-trained LLMs

March 31, 2025
Authors: Xuan Luo, Weizhi Wang, Xifeng Yan
cs.AI

Abstract

Various layer-skipping methods have been proposed to accelerate token generation in large language models (LLMs). However, they overlook a fundamental question: how do computational demands vary across the generation of different tokens? In this work, we introduce FlexiDepth, a method that dynamically adjusts the number of Transformer layers used in text generation. By incorporating a plug-in router and adapter, FlexiDepth enables adaptive layer-skipping in LLMs without modifying their original parameters. Applying FlexiDepth to the Llama-3-8B model skips 8 of its 32 layers while maintaining full (100%) benchmark performance. Experimental results with FlexiDepth show that computational demands in LLMs vary significantly with token type. Specifically, generating repetitive tokens or fixed phrases requires fewer layers, whereas producing tokens that involve computation or high uncertainty requires more layers. Interestingly, this adaptive allocation pattern aligns with human intuition. To advance research in this area, we have open-sourced FlexiDepth and a dataset documenting its layer-allocation patterns for future exploration.
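To make the "plug-in router and adapter" idea concrete, below is a minimal sketch of how a per-token router could gate a frozen Transformer layer. The class name FlexiLayer, the adapter width, and the sigmoid gating are illustrative assumptions, not the paper's actual FlexiDepth implementation; details such as KV-cache handling and the exact routing objective are omitted.

```python
# Illustrative sketch only: a router-gated wrapper around a frozen Transformer layer.
# Names (FlexiLayer, adapter_dim) and the gating scheme are assumptions, not the
# authors' code; FlexiDepth's actual design may differ.
import torch
import torch.nn as nn

class FlexiLayer(nn.Module):
    """Wraps a pre-trained Transformer layer with a per-token router that chooses
    between running the full layer and taking a lightweight adapter shortcut."""

    def __init__(self, layer: nn.Module, hidden_size: int, adapter_dim: int = 64):
        super().__init__()
        self.layer = layer                       # original pre-trained layer (kept frozen)
        self.router = nn.Linear(hidden_size, 1)  # plug-in router: per-token skip score
        self.adapter = nn.Sequential(            # small adapter used on the skip path
            nn.Linear(hidden_size, adapter_dim),
            nn.GELU(),
            nn.Linear(adapter_dim, hidden_size),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Per-token gate in [0, 1]: shape (batch, seq_len, 1).
        gate = torch.sigmoid(self.router(hidden_states))
        full_out = self.layer(hidden_states)                      # full-layer path
        skip_out = hidden_states + self.adapter(hidden_states)    # adapter shortcut path
        # Soft mix of the two paths; a hard per-token skip (run the layer only
        # when gate >= 0.5) could replace this at inference time.
        return gate * full_out + (1.0 - gate) * skip_out
```

In such a setup only the router and adapter would be trained while the wrapped layer stays frozen, which is consistent with the abstract's claim that the LLM's original parameters are left unmodified.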
