DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation
June 25, 2025
Authors: Shansan Gong, Ruixiang Zhang, Huangjie Zheng, Jiatao Gu, Navdeep Jaitly, Lingpeng Kong, Yizhe Zhang
cs.AI
Abstract
Diffusion large language models (dLLMs) are compelling alternatives to
autoregressive (AR) models because their denoising models operate over the
entire sequence. The global planning and iterative refinement features of dLLMs
are particularly useful for code generation. However, current training and
inference mechanisms for dLLMs in coding are still under-explored. To demystify
the decoding behavior of dLLMs and unlock their potential for coding, we
systematically investigate their denoising processes and reinforcement learning
(RL) methods. We train a 7B dLLM, DiffuCoder, on 130B tokens of code.
Using this model as a testbed, we analyze its decoding behavior, revealing how
it differs from that of AR models: (1) dLLMs can decide how causal their
generation should be without relying on semi-AR decoding, and (2) increasing
the sampling temperature diversifies not only token choices but also their
generation order. This diversity creates a rich search space for RL rollouts.
For RL training, to reduce the variance of token log-likelihood estimates and
maintain training efficiency, we propose coupled-GRPO, a novel
sampling scheme that constructs complementary mask noise for completions used
in training. In our experiments, coupled-GRPO significantly improves
DiffuCoder's performance on code generation benchmarks (+4.4% on EvalPlus) and
reduces its reliance on AR causality during decoding. Our work provides deeper insight
into the machinery of dLLM generation and offers an effective, diffusion-native
RL training framework. https://github.com/apple/ml-diffucoder.
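
The coupled sampling idea described in the abstract can be illustrated with a small sketch: for each completion, two noised copies are built with complementary masks, so every completion token's log-likelihood is scored exactly once across the pair. The code below is a minimal, hypothetical Python illustration only; the function names, the 0.5 mask ratio, and the Hugging Face-style model interface are assumptions, not the released implementation.

```python
import torch

def complementary_masks(completion_len: int, mask_ratio: float = 0.5):
    """Sample a random mask over completion positions and its complement.

    Together the two masks cover every completion token exactly once,
    which is the coverage property the coupled scheme relies on.
    (Hypothetical helper, not the paper's code.)
    """
    perm = torch.randperm(completion_len)
    cutoff = int(completion_len * mask_ratio)

    mask_a = torch.zeros(completion_len, dtype=torch.bool)
    mask_a[perm[:cutoff]] = True      # first subset of positions is masked
    mask_b = ~mask_a                  # complementary mask covers the rest
    return mask_a, mask_b


def coupled_logprob_estimate(model, prompt_ids, completion_ids, mask_token_id):
    """Estimate per-token log-likelihoods of a completion using two
    complementary noised copies. Assumes `model(input_ids)` returns an
    object with `.logits` of shape [batch, seq, vocab] (assumption)."""
    mask_a, mask_b = complementary_masks(completion_ids.size(0))
    logps = torch.empty(completion_ids.size(0))

    for mask in (mask_a, mask_b):
        noised = completion_ids.clone()
        noised[mask] = mask_token_id                  # hide masked tokens
        inputs = torch.cat([prompt_ids, noised]).unsqueeze(0)
        logits = model(inputs).logits[0, prompt_ids.size(0):]
        token_logps = torch.log_softmax(logits, dim=-1)
        # Score only the tokens that were masked (i.e., predicted) in this pass.
        logps[mask] = token_logps[mask].gather(
            -1, completion_ids[mask].unsqueeze(-1)
        ).squeeze(-1)

    return logps  # every completion token scored exactly once
```

Because each token is evaluated under exactly one of the two complementary masks, the pair yields a full-coverage log-likelihood estimate at twice the cost of a single forward pass, which is consistent with the abstract's stated goal of lowering estimate variance while keeping training efficient.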