Sink-Aware Pruning for Diffusion Language Models
February 19, 2026
Authors: Aidar Myrzakhan, Tianyi Li, Bowei Guo, Shengkun Tang, Zhiqiang Shen
cs.AI
Abstract
Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention-sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose **Sink-Aware Pruning**, which automatically identifies and prunes unstable sinks in DLMs (prior studies usually keep sinks for AR LLMs). Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
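The key diagnostic in the abstract, how much the dominant sink location shifts across denoising timesteps, can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes attention maps are available as a NumPy array of shape `[T, H, L, L]` (timesteps, heads, queries, keys), and the function name `sink_shift_rate` is hypothetical.

```python
import numpy as np

def sink_shift_rate(attn: np.ndarray) -> float:
    """Fraction of consecutive timesteps at which the dominant sink moves.

    The dominant sink at a timestep is the key position receiving the
    most attention mass, averaged over heads and queries. A rate near 0
    indicates a stable sink (the AR-like case); a high rate indicates
    the transient sinks the paper reports for DLMs.
    """
    # Mean attention received by each key position: shape [T, L]
    col_mass = attn.mean(axis=(1, 2))
    # Dominant sink index per timestep: shape [T]
    dominant = col_mass.argmax(axis=1)
    # Count timestep-to-timestep changes of the dominant position
    shifts = np.count_nonzero(np.diff(dominant))
    return shifts / max(len(dominant) - 1, 1)

# Toy example: all attention mass on key 0 at every timestep
stable = np.zeros((4, 2, 8, 8))
stable[..., 0] = 1.0
print(sink_shift_rate(stable))  # -> 0.0 (sink never moves)
```

A stability score like this could then gate pruning decisions, e.g. pruning sink tokens whose shift rate exceeds a threshold rather than preserving them unconditionally as AR-derived heuristics do.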