

Latent Refinement Decoding: Enhancing Diffusion-Based Language Models by Refining Belief States

October 13, 2025
作者: Qinglin Zhu, Yizhen Yao, Runcong Zhao, Yanzheng Xiang, Amrutha Saseendran, Chen Jin, Philip Alexander Teare, Bin Liang, Yulan He, Lin Gui
cs.AI

Abstract

Autoregressive (AR) models remain the standard for natural language generation but still suffer from high latency due to strictly sequential decoding. Recent diffusion-inspired approaches, such as LlaDA and Dream, mitigate this by generating in parallel, yet they suffer from two core limitations: information loss, as predictive distributions for non-finalized tokens are discarded at each step, and premature commitment, where local decisions are made without sufficient global coordination. We introduce Latent Refinement Decoding (LRD), a two-stage framework with Latent Refinement and a Predictive Feedback Loop. The first stage maintains masked positions as distributional mixtures of predicted tokens and the mask embedding, allowing the model to establish more globally consistent beliefs. The second stage progressively finalizes confident tokens while retaining uncertain ones for iterative feedback. KL-divergence dynamics provide a principled and reliable criterion for convergence and early stopping. Experiments across coding (HumanEval +6.3, MBPP +2.6) and reasoning (GSM8K +2.9, MATH500 +3.8) show that LRD improves accuracy while delivering speedups of up to 10.6x, making it a strong and versatile alternative for parallel sequence generation.
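The two-stage process described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation; the model is replaced by a fixed linear head, and all names (`lrd_decode`, `logits_fn`, `conf_thresh`, `kl_tol`) are hypothetical. It only shows the shape of the loop: unfinalized positions are kept as soft mixtures of predicted-token embeddings and the mask embedding (Latent Refinement), confident positions are progressively frozen (Predictive Feedback Loop), and the KL divergence between successive belief states serves as the stopping signal.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lrd_decode(logits_fn, embed, mask_emb, seq_len, vocab,
               conf_thresh=0.9, kl_tol=1e-4, max_steps=50):
    """Toy sketch of LRD-style decoding (hypothetical names/thresholds)."""
    finalized = np.full(seq_len, -1)                 # -1 = still open
    x = np.tile(mask_emb, (seq_len, 1)).astype(float)  # start from mask embedding
    prev = np.full((seq_len, vocab), 1.0 / vocab)    # uniform initial belief
    pred = np.zeros(seq_len, dtype=int)
    for _ in range(max_steps):
        probs = softmax(logits_fn(x))                # belief state, (seq_len, vocab)
        # Mean KL between successive belief states: convergence criterion.
        kl = np.sum(probs * (np.log(probs + 1e-12)
                             - np.log(prev + 1e-12))) / seq_len
        conf = probs.max(axis=-1)
        pred = probs.argmax(axis=-1)
        # Stage 2: finalize positions whose confidence clears the threshold.
        newly = (conf > conf_thresh) & (finalized < 0)
        finalized[newly] = pred[newly]
        open_pos = finalized < 0
        # Stage 1: open positions remain a mixture of the expected token
        # embedding and the mask embedding, weighted by confidence.
        exp_emb = probs @ embed
        mix = conf[:, None] * exp_emb + (1 - conf[:, None]) * mask_emb
        x = np.where(open_pos[:, None], mix,
                     embed[np.clip(finalized, 0, None)])
        prev = probs
        if kl < kl_tol and not open_pos.any():       # early stop once stable
            break
    return np.where(finalized < 0, pred, finalized)
```

A real system would use a masked diffusion LM such as LlaDA or Dream for `logits_fn`; here any confidence measure and stopping tolerance stand in for the paper's actual criteria.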