

From Next-Token to Next-Block: A Principled Adaptation Path for Diffusion LLMs

December 7, 2025
Authors: Yuchuan Tian, Yuchen Liang, Jiacheng Sun, Shuo Zhang, Guangwen Yang, Yingte Shu, Sibo Fang, Tianyu Guo, Kai Han, Chao Xu, Hanting Chen, Xinghao Chen, Yunhe Wang
cs.AI

Abstract

Large language models (LLMs) excel at generation, but the dominant autoregressive (AR) decoding is inherently sequential, creating a throughput bottleneck. Diffusion Language Models (DLMs), especially block-wise variants, enable parallel generation and intra-block bidirectional reasoning, yet training large DLMs from scratch is costly and wastes the knowledge stored in mature AR checkpoints. Prior "adaptation" attempts either modify logits or randomly grow attention masks to reach full-sequence diffusion, or simply transplant AR weights into a block-diffusion recipe, leaving a fundamental mismatch between AR causality and block-wise bidirectionality unaddressed. We reframe adaptation as an intra-paradigm path from AR to block diffusion by viewing AR as block diffusion with block size 1. Concretely, we design the adaptation pathway as follows: a context-causal attention mask (causal over the context, bidirectional only within the active block), an efficient parallel adaptation procedure, an auxiliary AR loss that maximizes data utilization and retains pretrained knowledge, and a gradual increase of the generation block size. The recipe integrates cleanly with masked block diffusion and maintains train-inference consistency. Built on these components, NBDiff-7B (Base and Instruct) inherits long-context modeling and reasoning capabilities and achieves state-of-the-art performance among 7B-class DLMs, delivering clear gains over strong baselines on general-knowledge, math, and code benchmarks. These results demonstrate that principled AR-to-block-diffusion adaptation is an effective and compute-efficient alternative to training DLMs from scratch. Code: https://github.com/YuchuanTian/NBDiff.
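To make the central design concrete, below is a minimal sketch of the context-causal attention mask described in the abstract, written in PyTorch under assumed conventions (a boolean mask where True means "may attend", with the active block placed at the end of the sequence); the function name context_causal_mask is illustrative and not taken from the NBDiff codebase.

import torch

def context_causal_mask(context_len: int, block_size: int) -> torch.Tensor:
    # Boolean (L, L) mask, True = attention allowed, where
    # L = context_len + block_size and the active block sits at the end.
    total = context_len + block_size
    # Context rows: standard causal (lower-triangular) attention.
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    # Active-block rows: bidirectional among the block's own tokens ...
    mask[context_len:, context_len:] = True
    # ... and full access to every context position.
    mask[context_len:, :context_len] = True
    return mask

# Example: 4 context tokens, active block of 3.
# Rows 0-3 remain strictly causal; rows 4-6 attend to all 7 positions.
print(context_causal_mask(4, 3).int())

With block_size = 1 the mask reduces to an ordinary causal mask, matching the paper's framing of AR decoding as block diffusion with block size 1; growing block_size during adaptation then corresponds to the gradual block-size increment in the recipe.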