Void in Language Models

May 20, 2025
Author: Mani Shemiranifar
cs.AI

Abstract

Despite advances in transformer-based language models (LMs), a fundamental question remains largely unanswered: Are all layers activated during inference? We investigate this question by detecting unactivated layers (which we refer to as Voids) using a non-trainable and parameter-free adaptive computation method called L2 Adaptive Computation (LAC). We adapt LAC from its original efficiency-focused application to trace activated layers during inference. This method monitors changes in the L2-norm of activations to identify voids. We analyze layer activation in instruction-tuned LMs across two phases: Prompt Processing (PP), where we trace the activated layers for each token in the input prompt, and Response Generation (RG), where we trace the activated layers for each generated token. We further demonstrate that distinct layers are activated during these two phases. To show the effectiveness of our method, we evaluate three distinct instruction-tuned LMs from the Llama, Mistral, and Qwen families on three benchmarks: MMLU, GPQA Diamond, and BoolQ. For example, on MMLU in a zero-shot setting, skipping voids in Qwen2.5-7B-Instruct improved performance from 69.24 to 71.29 while using only 30% of the layers. Similarly, Mistral-7B-Instruct-v0.3 on GPQA Diamond improved from 13.88 to 18.36 when using 70% of the layers during both the PP and RG phases. These results show that not all layers contribute equally during inference, and that selectively skipping most of them can improve model performance on certain tasks.
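
To make the L2-norm criterion concrete, the following is a minimal sketch of the kind of void detection the abstract describes, assuming a simple per-token relative-change rule over the residual stream. The function name detect_voids, the threshold value, and the exact criterion are illustrative assumptions and may differ from the LAC formulation in the paper.

```python
# Illustrative sketch (not the paper's exact LAC rule): mark a layer as a
# "void" for a token when it barely changes the L2 norm of that token's
# hidden state in the residual stream.
import torch

def detect_voids(hidden_states, threshold=0.1):
    """hidden_states: list of 1-D tensors [h_0, ..., h_L] for one token,
    where h_0 is the embedding output and h_i is the output of layer i.
    Returns a list of L booleans; True means layer i+1 is treated as a void."""
    voids = []
    for prev, curr in zip(hidden_states[:-1], hidden_states[1:]):
        prev_norm = prev.norm(p=2)
        rel_change = (curr.norm(p=2) - prev_norm).abs() / (prev_norm + 1e-8)
        voids.append(bool(rel_change < threshold))  # small change -> "void"
    return voids

# Example with random activations standing in for a 32-layer model's
# per-token hidden states (hidden size 4096).
states = [torch.randn(4096) for _ in range(33)]
print(detect_voids(states))
```

In the paper's two-phase analysis, such a per-token mask would be computed separately during Prompt Processing and Response Generation, and the layers flagged as voids would simply be skipped for that token.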
