Latent Refinement Decoding: Enhancing Diffusion-Based Language Models by Refining Belief States
October 13, 2025
作者: Qinglin Zhu, Yizhen Yao, Runcong Zhao, Yanzheng Xiang, Amrutha Saseendran, Chen Jin, Philip Alexander Teare, Bin Liang, Yulan He, Lin Gui
cs.AI
Abstract
Autoregressive (AR) models remain the standard for natural language
generation but still suffer from high latency due to strictly sequential
decoding. Recent diffusion-inspired approaches, such as LlaDA and Dream,
mitigate this by generating in parallel, yet they suffer from two core
limitations: information loss, as predictive distributions for non-finalized
tokens are discarded at each step, and premature commitment, where local
decisions are made without sufficient global coordination. We introduce Latent
Refinement Decoding (LRD), a two-stage framework with Latent Refinement and a
Predictive Feedback Loop. The first stage maintains masked positions as
distributional mixtures of predicted tokens and the mask embedding, allowing
the model to establish more globally consistent beliefs. The second stage
progressively finalizes confident tokens while retaining uncertain ones for
iterative feedback. KL-divergence dynamics provide a principled and reliable
criterion for convergence and early stopping. Experiments across coding
(HumanEval +6.3, MBPP +2.6) and reasoning (GSM8K +2.9, MATH500 +3.8) show that
LRD improves accuracy while delivering speedups of up to 10.6x, making it a
strong and versatile alternative for parallel sequence generation.
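The two-stage mechanism described above can be sketched as a toy iteration: masked positions are represented as a mixture of the expected token embedding (under the model's current belief) and the mask embedding, and the mean KL divergence between successive belief distributions serves as a convergence signal. This is a minimal numpy illustration, not the paper's implementation; the random linear "denoiser", the mixing weight `alpha`, and the stopping threshold are all hypothetical stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) per position, with a small epsilon for numerical safety
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

rng = np.random.default_rng(0)
V, L, D = 50, 8, 16              # vocab size, sequence length, embedding dim (toy)
E = rng.normal(size=(V, D))      # token embedding table (stand-in)
e_mask = rng.normal(size=D)      # mask-token embedding (stand-in)
W = rng.normal(size=(D, V))      # fixed random map standing in for the model

def model_logits(x):
    # placeholder for the diffusion LM's per-position logits
    return x @ W

alpha = 0.5                      # mixing weight between belief and mask (assumption)
prev = np.full((L, V), 1.0 / V)  # uniform initial belief over the vocabulary
x = np.tile(e_mask, (L, 1))      # all positions start as [MASK]

for step in range(50):
    probs = softmax(model_logits(x))         # current belief at each position
    soft = probs @ E                         # expected token embedding (no hard commit)
    x = alpha * soft + (1 - alpha) * e_mask  # Stage 1: distributional mixture
    delta = kl(probs, prev).mean()           # change in belief since last step
    prev = probs
    if delta < 1e-4:                         # Stage 2 trigger: beliefs have converged
        break

# After convergence, high-confidence positions would be finalized (e.g., argmax)
# while uncertain ones are kept for further iterative feedback.
confident = probs.max(axis=-1) > 0.9
```

In the actual method, finalization is progressive rather than a single thresholding pass, but the sketch captures the core idea: latent beliefs are refined globally before any token is committed, and the KL dynamics decide when refinement can stop early.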