Flow-OPD: On-Policy Distillation for Flow Matching Models
May 8, 2026
作者: Zhen Fang, Wenxuan Huang, Yu Zeng, Yiming Zhao, Shuang Chen, Kaituo Feng, Yunlong Lin, Lin Chen, Zehui Chen, Shaosheng Cao, Feng Zhao
cs.AI
Abstract
Existing Flow Matching (FM) text-to-image models suffer from two critical bottlenecks under multi-task alignment: the reward sparsity induced by scalar-valued rewards, and the gradient interference arising from jointly optimizing heterogeneous objectives, which together give rise to a 'seesaw effect' of competing metrics and pervasive reward hacking. Inspired by the success of On-Policy Distillation (OPD) in the large language model community, we propose Flow-OPD, the first unified post-training framework that integrates on-policy distillation into Flow Matching models. Flow-OPD adopts a two-stage alignment strategy: it first cultivates domain-specialized teacher models via single-reward GRPO fine-tuning, allowing each expert to reach its performance ceiling in isolation; it then establishes a robust initial policy through a Flow-based Cold-Start scheme and seamlessly consolidates heterogeneous expertise into a single student via a three-step orchestration of on-policy sampling, task-routing labeling, and dense trajectory-level supervision. We further introduce Manifold Anchor Regularization (MAR), which leverages a task-agnostic teacher to provide full-data supervision that anchors generation to a high-quality manifold, effectively mitigating the aesthetic degradation commonly observed in purely RL-driven alignment. Built upon Stable Diffusion 3.5 Medium, Flow-OPD raises the GenEval score from 63 to 92 and the OCR accuracy from 59 to 94, yielding an overall improvement of roughly 10 points over vanilla GRPO, while preserving image fidelity and human-preference alignment and exhibiting an emergent 'teacher-surpassing' effect. These results establish Flow-OPD as a scalable alignment paradigm for building generalist text-to-image models.
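To make the dense trajectory-level supervision and Manifold Anchor Regularization described above concrete, here is a minimal sketch of one distillation update, assuming a PyTorch-style flow-matching model that predicts a velocity field v(x_t, t, prompt). All names and shapes (FlowModel-style callables, `euler_sample_trajectory`, the latent shape, the task-routed `teachers` dict, and the MAR weight `lambda_mar`) are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Illustrative sketch of on-policy distillation for a flow-matching student.
# Assumed (hypothetical) interfaces: student/teacher models are callables
# v = model(x, t, prompt_emb); `teachers` maps a task id to its GRPO-tuned
# expert; `anchor_teacher` is the task-agnostic model used for MAR.
import torch
import torch.nn.functional as F

def euler_sample_trajectory(student, prompt_emb, steps=20):
    """Roll out the student's own flow (on-policy sampling) with Euler steps,
    recording every intermediate state for dense trajectory-level supervision."""
    # Hypothetical latent shape; chosen only to make the sketch self-contained.
    x = torch.randn(prompt_emb.size(0), 4, 64, 64, device=prompt_emb.device)
    traj = []
    for k in range(steps):
        t = torch.full((x.size(0),), k / steps, device=x.device)
        v = student(x, t, prompt_emb)
        traj.append((x, t))
        x = x + v.detach() / steps  # integrate along the student's own trajectory
    return traj

def distill_step(student, teachers, anchor_teacher, prompt_emb, task_ids,
                 lambda_mar=0.1):
    """One update: match the routed expert's velocity field at every point of
    the student-sampled trajectory, plus a manifold-anchoring term from the
    task-agnostic teacher (loss weighting is an assumption, not from the paper)."""
    traj = euler_sample_trajectory(student, prompt_emb)
    distill_loss, mar_loss = 0.0, 0.0
    for x, t in traj:
        v_student = student(x, t, prompt_emb)
        with torch.no_grad():
            # Task-routing labeling: each sample is supervised by its domain expert.
            v_expert = torch.stack([
                teachers[tid](x[i:i+1], t[i:i+1], prompt_emb[i:i+1]).squeeze(0)
                for i, tid in enumerate(task_ids)
            ])
            v_anchor = anchor_teacher(x, t, prompt_emb)
        distill_loss = distill_loss + F.mse_loss(v_student, v_expert)
        mar_loss = mar_loss + F.mse_loss(v_student, v_anchor)
    loss = (distill_loss + lambda_mar * mar_loss) / len(traj)
    loss.backward()
    return loss.item()
```

The key design point this sketch illustrates is that supervision is applied at every step of the student's own sampled trajectory rather than as a single scalar reward on the final image, which is how on-policy distillation sidesteps the reward sparsity and gradient interference that the abstract attributes to multi-reward GRPO.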