

T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT

May 1, 2025
作者: Dongzhi Jiang, Ziyu Guo, Renrui Zhang, Zhuofan Zong, Hao Li, Le Zhuo, Shilin Yan, Pheng-Ann Heng, Hongsheng Li
cs.AI

Abstract

Recent advancements in large language models have demonstrated how chain-of-thought (CoT) and reinforcement learning (RL) can improve performance. However, applying such reasoning strategies to the visual generation domain remains largely unexplored. In this paper, we present T2I-R1, a novel reasoning-enhanced text-to-image generation model, powered by RL with a bi-level CoT reasoning process. Specifically, we identify two levels of CoT that can be utilized to enhance different stages of generation: (1) the semantic-level CoT for high-level planning of the prompt and (2) the token-level CoT for low-level pixel processing during patch-by-patch generation. To better coordinate these two levels of CoT, we introduce BiCoT-GRPO with an ensemble of generation rewards, which seamlessly optimizes both generation CoTs within the same training step. By applying our reasoning strategies to the baseline model, Janus-Pro, we achieve superior performance, with a 13% improvement on T2I-CompBench and a 19% improvement on the WISE benchmark, even surpassing the state-of-the-art model FLUX.1. Code is available at: https://github.com/CaraJ7/T2I-R1
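To make the bi-level idea concrete, the following is a minimal sketch (not the authors' code) of what a BiCoT-GRPO-style training step could look like: for one prompt, sample a group of images, each produced by first writing a semantic-level CoT (a text plan) and then decoding token-level CoT (image tokens); score every image with an ensemble of reward models; convert the rewards into group-relative advantages; and apply a single policy-gradient-flavored update that covers both CoT levels at once. All names here (DummyPolicy, judges, plan/generate/update) are hypothetical placeholders, not the paper's API.

```python
# Hedged sketch of a bi-level CoT + group-relative RL step, with dummy stubs.
import numpy as np

rng = np.random.default_rng(0)

class DummyPolicy:
    """Stand-in for an autoregressive T2I model that emits a plan, then image tokens."""
    def plan(self, prompt):
        return f"plan for: {prompt}"                 # semantic-level CoT (text)
    def generate(self, prompt, plan):
        image = rng.random((8, 8))                   # fake image patches
        logprob = float(rng.normal())                # fake sequence log-probability
        return image, logprob
    def update(self, loss):
        pass                                         # a gradient step would go here

def ensemble_reward(image, prompt, judges):
    """Average the scores of several reward models (hypothetical judges)."""
    return float(np.mean([judge(image, prompt) for judge in judges]))

def bicot_grpo_step(policy, prompt, judges, group_size=8, eps=1e-6):
    group = []
    for _ in range(group_size):
        plan = policy.plan(prompt)                       # high-level planning
        image, logprob = policy.generate(prompt, plan)   # token-level CoT decoding
        group.append((image, logprob))
    rewards = np.array([ensemble_reward(img, prompt, judges) for img, _ in group])
    # Group-relative advantages: normalize rewards within the sampled group.
    advantages = (rewards - rewards.mean()) / (rewards.std() + eps)
    # One surrogate loss updates both CoT levels, since the plan and the image
    # tokens come from the same policy within a single rollout.
    loss = -float(np.mean([a * lp for a, (_, lp) in zip(advantages, group)]))
    policy.update(loss)
    return rewards.mean()

# Toy usage with two placeholder judges.
judges = [lambda img, p: img.mean(), lambda img, p: -abs(img.std() - 0.3)]
print(bicot_grpo_step(DummyPolicy(), "a red cube on a blue sphere", judges))
```

The design choice this sketch illustrates is that both CoT levels are sampled in one rollout and scored by the same reward ensemble, so a single group-relative update can coordinate them within one training step.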
