
Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules

December 2, 2025
作者: Amr Mohamed, Yang Zhang, Michalis Vazirgiannis, Guokan Shang
cs.AI

Abstract

Diffusion large language models (dLLMs) offer a promising alternative to autoregressive models, but their practical utility is severely hampered by slow, iterative sampling. We present SchED, a training-free, model-agnostic early-exit algorithm that aggregates full-span logit margins and halts decoding once a smooth, progress-dependent confidence threshold is met. We evaluate SchED on two dLLM families (Dream and LLaDA), in base and instruction-tuned variants, across ten benchmarks spanning multiple-choice question answering (MCQ), math, long-form QA/summarization, and translation. SchED delivers large, stable accelerations: on instruction-tuned models, it achieves 3.8-4.0× speedups while retaining 99.8-100% of the baseline score on average. On base models, SchED yields consistent speedups with 99.1-100% performance retention, reaching up to 2.34× under more aggressive settings. Using a conservative speed metric that heavily penalizes quality loss (QPS, γ = 4), we show that SchED is robust and clearly outperforms prior confidence-based early-exit methods, which break down on long-form generation. An entropy analysis of the model's token predictions reveals that instruction tuning speeds up the decay of predictive entropy. By turning genuine confidence stabilization into computational savings, SchED makes dLLM decoding substantially more efficient.
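The abstract describes SchED's decision rule only at a high level. The sketch below illustrates the general idea in Python, under stated assumptions: the dLLM interface (`model.denoise_step`), the mean top-1/top-2 margin as the full-span aggregate, and the cosine threshold schedule are all hypothetical choices for illustration, not the authors' implementation.

```python
import numpy as np

def span_margin(logits: np.ndarray) -> float:
    """Aggregate confidence over the full span: mean top-1 minus top-2
    logit margin across all positions. (Illustrative aggregation; the
    paper's exact statistic may differ.)"""
    top2 = np.sort(logits, axis=-1)[:, -2:]          # (seq_len, 2), ascending
    return float(np.mean(top2[:, 1] - top2[:, 0]))   # mean margin

def threshold(progress: float, tau0: float = 4.0, tau1: float = 1.0) -> float:
    """Smooth, progress-dependent confidence threshold: strict early in
    decoding, relaxed toward the end (hypothetical cosine schedule)."""
    return tau1 + 0.5 * (tau0 - tau1) * (1.0 + np.cos(np.pi * progress))

def decode_with_sched(model, x, num_steps: int = 128):
    """Iterative dLLM refinement with a SchED-style early exit: stop as
    soon as the aggregated span margin clears the scheduled threshold.
    `model.denoise_step` is an assumed interface returning (logits, x)."""
    for step in range(num_steps):
        logits, x = model.denoise_step(x)            # one refinement step
        progress = (step + 1) / num_steps            # decoding progress in (0, 1]
        if span_margin(logits) >= threshold(progress):
            break                                    # confident enough: exit early
    return x
```

On the QPS metric: the abstract does not define it, but if it follows the common pattern of speedup discounted by quality retention, e.g. QPS = speedup × (score ratio)^γ, then γ = 4 makes even small quality drops very costly, which is why long-form failures of prior methods show up so clearly under it. The paper's exact definition may differ.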