AlphaFlow: Understanding and Improving MeanFlow Models
October 23, 2025
Authors: Huijie Zhang, Aliaksandr Siarohin, Willi Menapace, Michael Vasilkovsky, Sergey Tulyakov, Qing Qu, Ivan Skorokhodov
cs.AI
Abstract
MeanFlow has recently emerged as a powerful framework for few-step generative
modeling trained from scratch, but its success is not yet fully understood. In
this work, we show that the MeanFlow objective naturally decomposes into two
parts: trajectory flow matching and trajectory consistency. Through gradient
analysis, we find that these terms are strongly negatively correlated, causing
optimization conflict and slow convergence. Motivated by these insights, we
introduce alpha-Flow, a broad family of objectives that unifies trajectory
flow matching, the Shortcut Model, and MeanFlow under one formulation. By
adopting a curriculum strategy that smoothly anneals from trajectory flow
matching to MeanFlow, alpha-Flow disentangles the conflicting objectives and
achieves better convergence.
better convergence. When trained from scratch on class-conditional ImageNet-1K
256x256 with vanilla DiT backbones, alpha-Flow consistently outperforms
MeanFlow across scales and settings. Our largest alpha-Flow-XL/2+ model
achieves new state-of-the-art results using vanilla DiT backbones, with FID
scores of 2.58 (1-NFE) and 2.15 (2-NFE).
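The curriculum described above can be sketched as a scalar weight that anneals smoothly from the trajectory-flow-matching regime toward the MeanFlow regime over training. The function below is a hypothetical illustration, not the paper's actual schedule; the function name, the `warmup_frac` parameter, and the cosine ramp shape are all assumptions.

```python
import math

def alpha_schedule(step: int, total_steps: int, warmup_frac: float = 0.5) -> float:
    """Hypothetical curriculum weight for alpha-Flow-style annealing.

    Returns alpha in [0, 1]: 0 corresponds to pure trajectory flow
    matching, 1 to the full MeanFlow objective. The weight ramps up
    smoothly (cosine shape) over the first `warmup_frac` of training
    and then stays at 1. The paper's real schedule may differ.
    """
    progress = min(step / (warmup_frac * total_steps), 1.0)
    # Smooth cosine ramp from 0 to 1 as progress goes 0 -> 1.
    return 0.5 * (1.0 - math.cos(math.pi * progress))
```

For example, with `total_steps=100` and the default `warmup_frac=0.5`, the weight starts at 0, reaches 1 at step 50, and remains 1 afterward, so early training is dominated by the flow-matching term and later training by the MeanFlow term.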