Learning Unmasking Policies for Diffusion Language Models
December 9, 2025
Authors: Metod Jazbec, Theo X. Olausson, Louis Béthune, Pierre Ablin, Michael Kirchhof, Joao Monteiro, Victor Turrisi, Jason Ramapuram, Marco Cuturi
cs.AI
Abstract
Diffusion (Large) Language Models (dLLMs) now match the downstream performance of their autoregressive counterparts on many tasks, while holding the promise of being more efficient during inference. One particularly successful variant is masked discrete diffusion, in which a buffer filled with special mask tokens is progressively replaced with tokens sampled from the model's vocabulary. Efficiency can be gained by unmasking several tokens in parallel, but unmasking too many at once risks degrading generation quality. Thus, one critical design aspect of dLLMs is the sampling procedure that selects, at each step of the diffusion process, which tokens to replace. Indeed, recent work has found that heuristic strategies such as confidence thresholding lead to both higher generation quality and higher token throughput compared to random unmasking. However, such heuristics have downsides: they require manual tuning, and we observe that their performance degrades with larger buffer sizes. In this work, we instead propose to train sampling procedures using reinforcement learning. Specifically, we formalize masked diffusion sampling as a Markov decision process in which the dLLM serves as the environment, and propose a lightweight policy architecture based on a single-layer transformer that maps dLLM token confidences to unmasking decisions. Our experiments show that these trained policies match the performance of state-of-the-art heuristics when combined with semi-autoregressive generation, while outperforming them in the full diffusion setting. We also examine the transferability of these policies, finding that they can generalize to new underlying dLLMs and longer sequence lengths. However, we also observe that their performance degrades when applied to out-of-domain data, and that fine-grained tuning of the accuracy-efficiency trade-off can be challenging with our approach.
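
To make the sampling procedure concrete, the following is a minimal sketch (not the authors' code) of the masked-diffusion sampling loop the abstract describes: a buffer of mask tokens is filled in over repeated steps, with a pluggable policy deciding which positions to unmask at each step. The dLLM forward pass is stubbed out with random confidences, and all names here (`dllm_confidences`, `threshold_policy`, `MASK`, the threshold value `tau`) are illustrative assumptions.

```python
# Minimal sketch of masked-diffusion sampling with a pluggable unmasking policy.
import numpy as np

MASK = -1  # placeholder id for the special mask token

def dllm_confidences(buffer: np.ndarray) -> np.ndarray:
    """Stand-in for a dLLM forward pass: returns a per-position confidence
    (e.g. max softmax probability) for every still-masked position."""
    rng = np.random.default_rng(abs(hash(buffer.tobytes())) % 2**32)
    conf = rng.uniform(0.0, 1.0, size=buffer.shape)
    conf[buffer != MASK] = np.nan  # already-unmasked positions are ignored
    return conf

def threshold_policy(conf: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """The heuristic named in the abstract: unmask every position whose
    confidence exceeds a manually tuned threshold tau; if none qualifies,
    unmask the single most confident position so the loop always progresses."""
    candidates = conf > tau  # NaN compares False, so unmasked slots are skipped
    if not candidates.any():
        candidates[np.nanargmax(conf)] = True
    return candidates

def sample(buffer_size: int = 16, policy=threshold_policy) -> np.ndarray:
    """One generation episode: start from an all-mask buffer and repeatedly
    let the policy choose which positions to fill, until none remain."""
    buffer = np.full(buffer_size, MASK, dtype=np.int64)
    steps = 0
    while (buffer == MASK).any():
        conf = dllm_confidences(buffer)
        unmask = policy(conf)
        # "Sample" tokens for the chosen positions (stubbed with dummy ids).
        buffer[unmask & (buffer == MASK)] = steps
        steps += 1
    print(f"finished in {steps} steps for {buffer_size} tokens")
    return buffer

if __name__ == "__main__":
    sample()
```

In the paper's MDP framing, the buffer plus the dLLM's confidences play the role of the state, the set of positions to unmask is the action, and the dLLM serves as the environment; swapping `threshold_policy` for a learned model recovers the trained-policy setup.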
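The abstract also mentions "a lightweight policy architecture based on a single-layer transformer that maps dLLM token confidences to unmasking decisions." Below is a hedged PyTorch sketch of what such a policy could look like; the input features, hidden sizes, and the per-position Bernoulli head are assumptions for illustration, not the paper's exact architecture.

```python
# Hypothetical single-layer transformer policy over per-position confidences.
import torch
import torch.nn as nn

class UnmaskingPolicy(nn.Module):
    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        # Embed two scalar features per position: confidence and a masked-flag.
        self.embed = nn.Linear(2, d_model)
        self.layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=2 * d_model,
            batch_first=True,
        )
        self.head = nn.Linear(d_model, 1)  # per-position unmask logit

    def forward(self, conf: torch.Tensor, is_masked: torch.Tensor) -> torch.Tensor:
        # conf, is_masked: (batch, seq_len); already-unmasked positions can
        # carry conf = 0 since they are excluded from the action space anyway.
        x = torch.stack([conf, is_masked.float()], dim=-1)  # (B, L, 2)
        h = self.layer(self.embed(x))                       # (B, L, d_model)
        logits = self.head(h).squeeze(-1)                   # (B, L)
        # Independent Bernoulli per masked position; an RL algorithm would
        # sample these actions and score episodes with a reward trading off
        # generation quality against the number of diffusion steps taken.
        return torch.sigmoid(logits) * is_masked.float()

policy = UnmaskingPolicy()
conf = torch.rand(1, 16)
is_masked = torch.ones(1, 16)
unmask_probs = policy(conf, is_masked)  # (1, 16) probabilities in [0, 1]
```

Attending across positions (rather than thresholding each one independently) is what would let such a policy adapt how many tokens it unmasks to the state of the whole buffer, which is where the abstract reports heuristics degrading as buffer size grows.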