dInfer: An Efficient Inference Framework for Diffusion Language Models
October 9, 2025
Authors: Yuxin Ma, Lun Du, Lanning Wei, Kun Chen, Qian Xu, Kangyu Wang, Guofeng Feng, Guoshan Lu, Lin Liu, Xiaojing Qi, Xinyuan Zhang, Zhen Tao, Haibo Feng, Ziyun Jiang, Ying Xu, Zenan Huang, Yihong Zhuang, Haokai Xu, Jiaqi Hu, Zhenzhong Lan, Junbo Zhao, Jianguo Li, Da Zheng
cs.AI
Abstract
Diffusion-based large language models (dLLMs) have emerged as a promising
alternative to autoregressive (AR) LLMs, leveraging denoising-based generation
to enable inherent parallelism. Although more and more open-source dLLMs have
emerged, their widespread adoption remains constrained by the lack of a
standardized and efficient inference framework. We present dInfer, an efficient
and extensible framework for dLLM inference. dInfer decomposes the inference
pipeline into four modular components (model, diffusion iteration manager,
decoding strategy, and KV-cache manager) and integrates novel algorithms for
each component alongside system-level optimizations. Through this combination
of algorithmic innovations and system enhancements, dInfer achieves substantial
efficiency gains without compromising output quality on LLaDA-MoE. At batch
size 1, it surpasses 1,100 tokens per second on HumanEval and averages over 800
tokens per second across six benchmarks on 8× H800 GPUs. Compared to
prior systems, dInfer delivers a 10× speedup over Fast-dLLM while
maintaining similar model performance. Even compared to Qwen2.5-3B, an AR model
with a comparable number of activated parameters and comparable performance
that is highly optimized with the latest vLLM inference engine, dInfer still
delivers a 2-3× speedup. The implementation of dInfer is open-sourced
at https://github.com/inclusionAI/dInfer.
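
To make the modular decomposition concrete, the sketch below shows one way the four components might compose into a denoising loop: each iteration runs a single forward pass, the decoding strategy commits some subset of masked positions in parallel, and the KV-cache manager carries state between iterations. All interface and method names here (Pipeline, DiffusionModel, IterationManager, Decoder, KVCacheManager, mask_id, commit) are illustrative assumptions, not dInfer's actual API; see the repository above for the real implementation.

```python
# A minimal, illustrative sketch of the four-component decomposition described
# in the abstract. Every name below is an assumption for exposition only and
# does not come from the dInfer codebase.
from dataclasses import dataclass
from typing import List, Protocol


class KVCacheManager(Protocol):
    """Stores and refreshes key-value states reused across denoising steps."""
    def update(self, tokens: List[int]) -> None: ...


class DiffusionModel(Protocol):
    """Scores every masked position in a single forward pass."""
    def forward(self, tokens: List[int], cache: KVCacheManager) -> List[List[float]]: ...


class IterationManager(Protocol):
    """Schedules denoising iterations and decides when generation is finished."""
    def done(self, tokens: List[int], mask_id: int) -> bool: ...


class Decoder(Protocol):
    """Chooses which masked positions to commit this iteration
    (the parallel decoding strategy)."""
    def commit(self, tokens: List[int], logits: List[List[float]],
               mask_id: int) -> List[int]: ...


@dataclass
class Pipeline:
    model: DiffusionModel
    iterations: IterationManager
    decoder: Decoder
    cache: KVCacheManager
    mask_id: int = 0  # hypothetical id of the [MASK] token

    def generate(self, prompt: List[int], gen_len: int) -> List[int]:
        # Start from the prompt followed by a fully masked generation block.
        tokens = prompt + [self.mask_id] * gen_len
        while not self.iterations.done(tokens, self.mask_id):
            logits = self.model.forward(tokens, self.cache)  # one denoising step
            tokens = self.decoder.commit(tokens, logits, self.mask_id)
            self.cache.update(tokens)  # keep KV states warm for the next step
        return tokens[len(prompt):]
```

Because each component sits behind its own interface, a new decoding strategy or cache policy can, in principle, be swapped in without touching the denoising loop itself, which is what makes combining algorithmic and system-level optimizations per component tractable.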