dInfer: An Efficient Inference Framework for Diffusion Language Models
October 9, 2025
Authors: Yuxin Ma, Lun Du, Lanning Wei, Kun Chen, Qian Xu, Kangyu Wang, Guofeng Feng, Guoshan Lu, Lin Liu, Xiaojing Qi, Xinyuan Zhang, Zhen Tao, Haibo Feng, Ziyun Jiang, Ying Xu, Zenan Huang, Yihong Zhuang, Haokai Xu, Jiaqi Hu, Zhenzhong Lan, Junbo Zhao, Jianguo Li, Da Zheng
cs.AI
Abstract
Diffusion-based large language models (dLLMs) have emerged as a promising alternative to autoregressive (AR) LLMs, leveraging denoising-based generation to enable inherent parallelism. Although more and more open-source dLLMs have emerged, their widespread adoption remains constrained by the lack of a standardized and efficient inference framework. We present dInfer, an efficient and extensible framework for dLLM inference. dInfer decomposes the inference pipeline into four modular components--the model, the diffusion iteration manager, the decoding strategy, and the KV-cache manager--and integrates novel algorithms for each component alongside system-level optimizations. Through this combination of algorithmic innovations and system enhancements, dInfer achieves substantial efficiency gains on LLaDA-MoE without compromising output quality. At batch size 1, it surpasses 1,100 tokens per second on HumanEval and averages over 800 tokens per second across six benchmarks on 8× H800 GPUs. Compared to prior systems, dInfer delivers a 10× speedup over Fast-dLLM while maintaining similar model performance. Even compared to Qwen2.5-3B, an AR model with a comparable number of activated parameters and comparable performance that is highly optimized with the latest vLLM inference engine, dInfer still delivers a 2-3× speedup. The implementation of dInfer is open-sourced at https://github.com/inclusionAI/dInfer.
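To make the four-way decomposition concrete, below is a minimal, hypothetical Python sketch of how such a pipeline could be wired together. The class names, method signatures, and toy decoding policy are illustrative assumptions for exposition, not the actual dInfer API; see the repository above for the real implementation.

```python
# Hypothetical sketch (NOT the actual dInfer API) of the four components the
# abstract names: model, diffusion iteration manager, decoding strategy, and
# KV-cache manager. All names below are illustrative assumptions.
import random

MASK = -1  # sentinel id for a still-masked position


class KVCacheManager:
    """Stores key/value states so committed positions need not be recomputed."""
    def __init__(self):
        self._cache = {}

    def get(self, key):
        return self._cache.get(key)

    def put(self, key, value):
        self._cache[key] = value


class DecodingStrategy:
    """Chooses which masked positions to commit at each denoising iteration."""
    def select(self, scores, masked):
        # Toy parallel-decoding policy: commit the half of the masked
        # positions with the highest confidence scores (at least one).
        k = max(1, len(masked) // 2)
        return sorted(masked, key=lambda p: scores[p], reverse=True)[:k]


class DiffusionIterationManager:
    """Drives the denoising loop, wiring model, strategy, and cache together."""
    def __init__(self, model, strategy, kv_cache):
        self.model, self.strategy, self.kv = model, strategy, kv_cache

    def generate(self, tokens, max_steps=16):
        masked = {i for i, t in enumerate(tokens) if t == MASK}
        for _ in range(max_steps):
            if not masked:
                break
            preds, scores = self.model(tokens, self.kv)  # one denoising pass
            for pos in self.strategy.select(scores, masked):
                tokens[pos] = preds[pos]  # commit the predicted token
                masked.discard(pos)
        return tokens


def toy_model(tokens, kv_cache):
    """Stand-in model: random ids and confidences. It ignores the cache; a
    real dLLM would reuse cached K/V states for already-committed positions."""
    preds = [t if t != MASK else random.randrange(100) for t in tokens]
    scores = [random.random() for _ in tokens]
    return preds, scores


if __name__ == "__main__":
    mgr = DiffusionIterationManager(toy_model, DecodingStrategy(), KVCacheManager())
    print(mgr.generate([7, MASK, MASK, 42, MASK]))
```

Because each component sits behind a small interface, a parallel decoding policy, a different iteration schedule, or an alternative cache eviction scheme can be swapped in independently, which is the extensibility argument the abstract makes.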