Visual-CoG: Stage-Aware Reinforcement Learning with Chain of Guidance for Text-to-Image Generation
August 25, 2025
Authors: Yaqi Li, Peng Chen, Mingyang Han, Bu Pi, Haoxiang Shi, Runzhou Zhao, Yang Yao, Xuan Zhang, Jun Song
cs.AI
Abstract
Despite the promising progress of recent autoregressive models in
text-to-image (T2I) generation, their ability to handle multi-attribute and
ambiguous prompts remains limited. To address these limitations, existing works
have applied chain-of-thought (CoT) to enable stage-aware visual synthesis and
employed reinforcement learning (RL) to improve reasoning capabilities.
However, most models provide reward signals only at the end of the generation
stage. This monolithic final-only guidance makes it difficult to identify which
stages contribute positively to the final outcome and may lead to suboptimal
policies. To tackle this issue, we propose a Visual-Chain of Guidance
(Visual-CoG) paradigm consisting of three stages: semantic reasoning, process
refining, and outcome evaluation, with stage-aware rewards providing immediate
guidance throughout the image generation pipeline. We further construct a
visual cognition benchmark, VisCog-Bench, which comprises four subtasks to
evaluate the effectiveness of semantic reasoning. Comprehensive evaluations on
GenEval, T2I-CompBench, and the proposed VisCog-Bench show improvements of 15%,
5%, and 19%, respectively, demonstrating the superior performance of the
proposed Visual-CoG. We will release all the resources soon.
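To make the stage-aware reward idea concrete, below is a minimal Python sketch of how per-stage rewards for the three stages named in the abstract (semantic reasoning, process refining, outcome evaluation) could be aggregated into a single training signal. The stage weights, reward values, and the weighted-sum aggregation are illustrative assumptions; the abstract does not specify the actual reward functions, their scales, or how credit is assigned within the RL objective.

```python
from dataclasses import dataclass

# Hypothetical stage list -- the abstract only states that rewards are attached
# to semantic reasoning, process refining, and outcome evaluation; the concrete
# reward models and aggregation scheme are assumptions for illustration.
STAGES = ("semantic_reasoning", "process_refining", "outcome_evaluation")


@dataclass
class StageReward:
    stage: str
    value: float   # stage reward, e.g. in [0, 1] from a scorer for that stage
    weight: float  # assumed relative importance of this stage's guidance


def combined_return(stage_rewards: list[StageReward]) -> float:
    """Aggregate per-stage rewards into one scalar return.

    A normalized weighted sum is used purely as a placeholder; a real
    implementation might instead credit each stage's reward only to the
    tokens or actions produced during that stage.
    """
    total_weight = sum(r.weight for r in stage_rewards)
    return sum(r.value * r.weight for r in stage_rewards) / max(total_weight, 1e-8)


if __name__ == "__main__":
    rewards = [
        StageReward("semantic_reasoning", value=0.9, weight=1.0),   # prompt parsed into attributes
        StageReward("process_refining", value=0.6, weight=1.0),     # intermediate image partially aligned
        StageReward("outcome_evaluation", value=0.8, weight=2.0),   # final image scored by an evaluator
    ]
    print(f"combined return: {combined_return(rewards):.3f}")
```

In contrast, the "final-only" baseline criticized in the abstract would correspond to keeping only the outcome_evaluation term, which is what makes per-stage credit assignment ambiguous.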