
Direct Multi-Token Decoding

October 13, 2025
Authors: Xuan Luo, Weizhi Wang, Xifeng Yan
cs.AI

Abstract

Decoder-only transformers have become the standard architecture for large language models (LLMs) due to their strong performance. Recent studies suggest that, in pre-trained LLMs, early, middle, and late layers may serve distinct roles: Early layers focus on understanding the input context, middle layers handle task-specific processing, and late layers convert abstract representations into output tokens. We hypothesize that once representations have been processed by the early and middle layers, the resulting hidden states may encapsulate sufficient information to support the generation of multiple tokens using only the late layers, eliminating the need to repeatedly traverse the early and middle layers. We refer to this inference paradigm as Direct Multi-Token Decoding (DMTD). Unlike speculative decoding, our method introduces no additional parameters, auxiliary routines, or post-generation verification. Despite being trained on a limited dataset, a fine-tuned DMTD Qwen3-4B model has already demonstrated promising results, achieving up to a 2x speedup with only minor performance loss. Moreover, as shown in our scaling analysis, its performance is expected to further improve with larger training datasets.
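To make the decoding cycle concrete, below is a minimal, hypothetical sketch of how a DMTD-style loop might alternate between one full forward pass and several late-layers-only decoding steps. The layer split, the cycle length `k`, and feeding token embeddings directly into the late layers are illustrative assumptions, not the paper's released implementation; the modules use random weights, so only the control flow is meaningful.

```python
# Minimal conceptual sketch of a DMTD-style inference cycle.
# Module names (early_middle, late, embed, lm_head) and the cycle length k
# are assumptions for illustration, not the authors' actual code.
import torch
import torch.nn as nn

d_model, vocab = 64, 1000

class Block(nn.Module):
    """Stand-in for one transformer layer (attention/KV-cache details omitted)."""
    def __init__(self):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                nn.Linear(d_model, d_model))
    def forward(self, h):
        return h + self.ff(h)

early_middle = nn.Sequential(*[Block() for _ in range(6)])  # early + middle layers
late = nn.Sequential(*[Block() for _ in range(2)])          # late layers only
embed = nn.Embedding(vocab, d_model)
lm_head = nn.Linear(d_model, vocab)

@torch.no_grad()
def dmtd_generate(prompt_ids, n_new=8, k=4):
    """Every k-th token runs the full stack; the other k-1 tokens are decoded
    by the late layers alone, skipping the early and middle layers."""
    ids = list(prompt_ids)
    for step in range(n_new):
        if step % k == 0:
            # Full pass: refresh the hidden state through early + middle layers.
            h_last = early_middle(embed(torch.tensor(ids)))[-1:]
        else:
            # Direct decoding: reuse the embedding of the last generated token.
            h_last = embed(torch.tensor(ids[-1:]))
        logits = lm_head(late(h_last))   # late layers map hidden state -> vocab
        ids.append(int(logits[0].argmax()))
    return ids

print(dmtd_generate([1, 2, 3]))
```

In this sketch, the speedup comes from the `else` branch: k-1 of every k tokens bypass the early and middle layers entirely, with no draft model or verification step, matching the abstract's contrast with speculative decoding.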