
pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation

October 16, 2025
Authors: Hansheng Chen, Kai Zhang, Hao Tan, Leonidas Guibas, Gordon Wetzstein, Sai Bi
cs.AI

Abstract

Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models (pi-Flow). pi-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard ℓ₂ flow matching loss. By simply mimicking the teacher's behavior, pi-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 256², it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, pi-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality.
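
The abstract describes the mechanism only at a high level. The PyTorch sketch below is one way to read it, not the paper's actual implementation: the K-anchor, linearly interpolated policy in policy_velocity, the student/teacher call signatures, and the distill_step training loop are all hypothetical placeholders, chosen only to illustrate a single student evaluation yielding a network-free policy, cheap Euler substeps, and an ℓ₂ flow-matching loss against the teacher along the policy's own trajectory.

```python
# Minimal sketch of pi-Flow-style imitation distillation.
# The policy parameterization and module interfaces below are assumptions,
# not the paper's actual design.
import torch
import torch.nn.functional as F

def policy_velocity(anchors, t_start, t_end, s):
    """Network-free policy (hypothetical form): linearly interpolate K anchor
    velocities, predicted once by the student, over the substep interval."""
    # anchors: (B, K, C, H, W) velocities from a single student evaluation
    K = anchors.shape[1]
    u = (s - t_start) / (t_end - t_start + 1e-8)  # normalized substep position
    u = min(max(u, 0.0), 1.0)
    pos = u * (K - 1)
    i0 = int(pos)
    i1 = min(i0 + 1, K - 1)
    w = pos - i0
    # No extra network evaluation here: just a cheap interpolation
    return (1.0 - w) * anchors[:, i0] + w * anchors[:, i1]

def distill_step(student, teacher, x_t, t_start, t_end, n_substeps=8):
    """One imitation-distillation step: roll out the policy's ODE trajectory
    with Euler substeps and match its velocity to the teacher's velocity
    using a standard l2 (flow-matching) loss."""
    anchors = student(x_t, t_start)          # single NFE -> policy parameters
    dt = (t_end - t_start) / n_substeps
    x, s, loss = x_t, t_start, 0.0
    for _ in range(n_substeps):
        v_pi = policy_velocity(anchors, t_start, t_end, s)
        with torch.no_grad():
            v_teacher = teacher(x, s)        # teacher queried along the policy trajectory
        loss = loss + F.mse_loss(v_pi, v_teacher)
        # Cheap Euler ODE substep; stopping gradients through the rollout is a
        # simplifying assumption (the abstract does not specify this detail).
        x = (x + dt * v_pi).detach()
        s = s + dt
    return loss / n_substeps
```

The property the sketch tries to capture is that only the student(x_t, t_start) call costs a network evaluation; every substep velocity comes from the already-predicted policy parameters, so the sampling cost per outer step stays at one NFE regardless of n_substeps.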