Sequential Diffusion Language Models

September 28, 2025
Authors: Yangzhou Liu, Yue Cao, Hao Li, Gen Luo, Zhe Chen, Weiyun Wang, Xiaobo Liang, Biqing Qi, Lijun Wu, Changyao Tian, Yanting Zhang, Yuqiang Li, Tong Lu, Yu Qiao, Jifeng Dai, Wenhai Wang
cs.AI

Abstract

Diffusion language models (DLMs) have strong theoretical efficiency but are limited by fixed-length decoding and incompatibility with key-value (KV) caches. Block diffusion mitigates these issues, yet it still enforces a fixed block size and requires expensive training. We introduce Next Sequence Prediction (NSP), which unifies next-token and next-block prediction, enabling the model to adaptively determine the generation length at each step. When the length is fixed to 1, NSP reduces to standard next-token prediction. Building on NSP, we propose the Sequential Diffusion Language Model (SDLM), which can retrofit pre-trained autoregressive language models (ALMs) at minimal cost. Specifically, SDLM performs diffusion inference within fixed-size mask blocks, but dynamically decodes consecutive subsequences based on model confidence, thereby preserving KV-cache compatibility and improving robustness to varying uncertainty and semantics across the sequence. Experiments show that SDLM matches or surpasses strong autoregressive baselines using only 3.5M training samples, while achieving 2.1× higher throughput than Qwen-2.5. Notably, the SDLM-32B model delivers even more pronounced efficiency gains, demonstrating the strong scalability potential of our modeling paradigm. Project page and code: https://github.com/OpenGVLab/SDLM
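To make the decoding loop concrete, below is a minimal sketch of one confidence-gated NSP step in the style the abstract describes: a fixed-size block of mask tokens is denoised in parallel, and only the longest confident prefix is committed, so committed tokens behave like ordinary autoregressive output for KV caching. The helper names (`model`, `mask_id`, the threshold `tau`, and the HF-style `.logits` interface) are illustrative assumptions, not the paper's exact API.

```python
import torch

def sdlm_decode_step(model, prefix_ids, block_size=8, tau=0.9, mask_id=0):
    """One illustrative Next Sequence Prediction step (sketch, not the
    paper's implementation).

    Appends a fixed-size block of mask tokens, runs one parallel
    denoising pass, and commits only the longest confident prefix of
    the block, so committed tokens can be cached like normal
    autoregressive output.
    """
    # Append a fixed-size block of mask tokens after the decoded prefix.
    masks = torch.full((1, block_size), mask_id, dtype=prefix_ids.dtype)
    inputs = torch.cat([prefix_ids, masks], dim=1)

    # One denoising pass predicts all masked positions in parallel
    # (assumes an HF-style forward that returns .logits).
    logits = model(inputs).logits[:, -block_size:, :]
    probs = logits.softmax(dim=-1)
    conf, tokens = probs.max(dim=-1)  # per-position confidence and argmax

    # Commit the longest contiguous prefix whose confidence exceeds tau.
    # Always accept at least one token: with n == 1, NSP reduces to
    # standard next-token prediction, as the abstract notes.
    confident = (conf[0] >= tau).tolist()
    n = 1
    while n < block_size and confident[n]:
        n += 1

    return torch.cat([prefix_ids, tokens[:, :n]], dim=1)
```

Under this sketch, the variable-length commit is what preserves KV-cache compatibility: accepted tokens are appended left-to-right exactly as an ALM would emit them, while the remaining masked positions are simply regenerated in the next step.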