

Sample By Step, Optimize By Chunk: Chunk-Level GRPO For Text-to-Image Generation

October 24, 2025
Authors: Yifu Luo, Penghui Du, Bo Li, Sinan Du, Tiantian Zhang, Yongzhe Chang, Kai Wu, Kun Gai, Xueqian Wang
cs.AI

Abstract

Group Relative Policy Optimization (GRPO) has shown strong potential for flow-matching-based text-to-image (T2I) generation, but it faces two key limitations: inaccurate advantage attribution and the neglect of the temporal dynamics of generation. In this work, we argue that shifting the optimization paradigm from the step level to the chunk level can effectively alleviate these issues. Building on this idea, we propose Chunk-GRPO, the first chunk-level GRPO-based approach for T2I generation. The insight is to group consecutive steps into coherent 'chunks' that capture the intrinsic temporal dynamics of flow matching, and to optimize policies at the chunk level. In addition, we introduce an optional weighted sampling strategy to further enhance performance. Extensive experiments show that Chunk-GRPO achieves superior results in both preference alignment and image quality, highlighting the promise of chunk-level optimization for GRPO-based methods.
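To make the chunk-level idea concrete, below is a minimal sketch (not the authors' code) of what a chunk-level GRPO objective could look like. It assumes per-step log-probabilities from a stochastic flow-matching sampler are available; the function name, the `chunk_sizes` grouping, the clip range, and the clipped-surrogate form are illustrative assumptions, and the paper's exact objective and chunking scheme may differ.

```python
# Hypothetical sketch of a chunk-level GRPO loss for T2I flow matching.
# Step-level GRPO would form one importance ratio per sampling step;
# here, consecutive steps are grouped into chunks and log-probs are
# summed within each chunk, giving one ratio per chunk instead.
import torch

def chunk_grpo_loss(logp_new, logp_old, rewards, chunk_sizes, clip_eps=0.2):
    """Clipped GRPO-style surrogate with chunk-level importance ratios.

    logp_new, logp_old: [G, T] per-step log-probs for G group samples,
                        T sampling steps (old = behavior policy).
    rewards:            [G]    scalar reward per generated image.
    chunk_sizes:        ints summing to T (e.g. [4, 4, 2]); a
                        hypothetical grouping of consecutive steps.
    """
    # Group-relative advantage: normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # [G]

    losses = []
    start = 0
    for size in chunk_sizes:
        # One importance ratio per chunk: sum log-probs over the chunk.
        ratio = torch.exp(
            logp_new[:, start:start + size].sum(dim=1)
            - logp_old[:, start:start + size].sum(dim=1)
        )  # [G]
        unclipped = ratio * adv
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
        losses.append(-torch.min(unclipped, clipped).mean())
        start += size
    return torch.stack(losses).mean()
```

Under these assumptions, the design choice is that summing log-probs over a chunk assigns credit to a coherent phase of the sampling trajectory rather than to a single step, which is one plausible reading of how chunk-level optimization addresses the advantage-attribution problem the abstract describes.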