ReasonGen-R1: CoT for Autoregressive Image Generation Models through SFT and RL
May 30, 2025
Authors: Yu Zhang, Yunqi Li, Yifan Yang, Rui Wang, Yuqing Yang, Dai Qi, Jianmin Bao, Dongdong Chen, Chong Luo, Lili Qiu
cs.AI
Abstract
Although chain-of-thought reasoning and reinforcement learning (RL) have driven breakthroughs in NLP, their integration into generative vision models remains underexplored. We introduce ReasonGen-R1, a two-stage framework that first imbues an autoregressive image generator with explicit text-based "thinking" skills via supervised fine-tuning on a newly generated reasoning dataset of written rationales, and then refines its outputs using Group Relative Policy Optimization (GRPO). To enable the model to reason through text before generating images, we automatically generate and release a corpus of model-crafted rationales paired with visual prompts, enabling controlled planning of object layouts, styles, and scene compositions. Our GRPO algorithm uses reward signals from a pretrained vision-language model to assess overall visual quality, optimizing the policy in each update. Evaluations on GenEval, DPG, and the T2I benchmark demonstrate that ReasonGen-R1 consistently outperforms strong baselines and prior state-of-the-art models. More: aka.ms/reasongen.
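For readers unfamiliar with Group Relative Policy Optimization, the sketch below illustrates the group-relative advantage computation at its core: for each prompt, a group of sampled images is scored by the reward model, and each score is normalized against the mean and standard deviation of its own group. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name and the placeholder reward values are hypothetical.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Normalize each rollout's reward against the mean and std of its group:
    # the group-relative baseline that replaces a learned critic in GRPO.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy example: 2 prompts x 4 sampled images, each scored by a pretrained
# vision-language reward model (the scores here are placeholder values).
vlm_scores = torch.tensor([[0.62, 0.71, 0.55, 0.80],
                           [0.30, 0.45, 0.40, 0.35]])
print(grpo_advantages(vlm_scores))
```

The resulting advantages weight the policy-gradient update for each sampled image, so images scored above their group's average are reinforced and below-average ones are suppressed.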