

Diffutron: A Masked Diffusion Language Model for Turkish

March 20, 2026
Authors: Şuayp Talha Kocabay, Talha Rüzgar Akkuş
cs.AI

Abstract

Masked Diffusion Language Models (MDLMs) have emerged as a compelling non-autoregressive alternative to standard large language models; however, their application to morphologically rich languages remains limited. In this paper, we introduce Diffutron, a masked diffusion language model specifically designed for Turkish. Our approach leverages a resource-efficient training pipeline, starting with LoRA-based continual pre-training of a multilingual encoder on a large-scale corpus. To enable generative capabilities, we employ a progressive instruction-tuning strategy, sequentially adapting the model on general and task-specific instruction sets. Experimental results across comprehensive benchmarks demonstrate that, despite its compact size, our model achieves competitive performance compared to existing multi-billion-parameter baselines. These findings validate the effectiveness of masked diffusion modeling combined with multi-stage tuning for non-autoregressive text generation in Turkish.
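The masked-diffusion objective mentioned above can be sketched in a few lines: sample a masking level t, independently replace each token with a mask symbol with probability t, and train the model to recover the masked positions. This is a minimal illustration of the general MDLM forward (corruption) process, not the paper's implementation; `MASK_ID` and the toy token sequence are hypothetical placeholders.

```python
import random

MASK_ID = 0  # hypothetical mask-token id (stands in for the tokenizer's [MASK])

def corrupt(tokens, t, rng):
    """Masked-diffusion forward process: each token is independently
    replaced by MASK_ID with probability t (the diffusion time)."""
    return [MASK_ID if rng.random() < t else tok for tok in tokens]

rng = random.Random(42)
t = rng.random()               # sample a masking level t ~ U(0, 1)
seq = [5, 17, 9, 23, 4, 11]    # toy token ids (illustrative only)
noisy = corrupt(seq, t, rng)

# The masked positions are the model's prediction targets; in the
# standard MDLM loss their cross-entropy terms are reweighted by 1/t.
targets = [i for i, tok in enumerate(noisy) if tok == MASK_ID]
```

At t close to 0 almost nothing is masked (an easy denoising step), while at t close to 1 the model must reconstruct the sequence from scratch; averaging the reweighted loss over t yields the non-autoregressive training objective.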