

The Promise of RL for Autoregressive Image Editing

August 1, 2025
Authors: Saba Ahmadi, Rabiul Awal, Ankur Sikarwar, Amirhossein Kazemnejad, Ge Ya Luo, Juan A. Rodriguez, Sai Rajeswar, Siva Reddy, Christopher Pal, Benno Krojer, Aishwarya Agrawal
cs.AI

Abstract

We explore three strategies to enhance performance on a wide range of image editing tasks: supervised fine-tuning (SFT), reinforcement learning (RL), and Chain-of-Thought (CoT) reasoning. In order to study all these components in one consistent framework, we adopt an autoregressive multimodal model that processes textual and visual tokens in a unified manner. We find RL combined with a large multimodal LLM verifier to be the most effective of these strategies. As a result, we release EARL: Editing with Autoregression and RL, an RL-based image editing model that performs competitively on a diverse range of edits compared to strong baselines, despite using much less training data. Thus, EARL pushes the frontier of autoregressive multimodal models on image editing. We release our code, training data, and trained models at https://github.com/mair-lab/EARL.
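To make the central idea concrete, below is a minimal, self-contained sketch of verifier-rewarded RL for an autoregressive editing policy. It is not the EARL implementation: the policy (`TinyEditPolicy`), the verifier (`verifier_score`), and all sizes are hypothetical stand-ins, and the update shown is a generic REINFORCE-style objective with a mean-reward baseline, chosen only to illustrate how a multimodal-LLM verifier's scalar score can drive policy-gradient training over sampled visual tokens.

```python
# Hypothetical sketch: RL with a verifier-assigned reward for an
# autoregressive image-editing policy. Names and shapes are illustrative,
# not taken from the EARL codebase.
import torch
import torch.nn as nn

VOCAB, EDIT_LEN, BATCH = 256, 32, 4  # toy visual-token vocabulary / lengths

class TinyEditPolicy(nn.Module):
    """Stand-in for an autoregressive multimodal model: maps a conditioning
    sequence (instruction + source-image tokens, here random one-hots) to
    logits over visual tokens."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(VOCAB, 128, batch_first=True)
        self.head = nn.Linear(128, VOCAB)

    def forward(self, onehot_seq):
        h, _ = self.rnn(onehot_seq)
        return self.head(h)  # (B, T, VOCAB) logits

def verifier_score(tokens):
    """Hypothetical multimodal-LLM verifier: in the paper's setup this would
    judge the edited image against the instruction; here a toy proxy returns
    one scalar reward per sample."""
    return tokens.float().mean(dim=1) / VOCAB  # (B,)

policy = TinyEditPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(3):  # toy training loop
    # Conditioning prefix: random one-hot tokens standing in for the
    # instruction and source image.
    prefix = torch.eye(VOCAB)[torch.randint(VOCAB, (BATCH, EDIT_LEN))]
    logits = policy(prefix)
    dist = torch.distributions.Categorical(logits=logits)
    tokens = dist.sample()                      # (B, T) sampled visual tokens
    logp = dist.log_prob(tokens).sum(dim=1)     # sequence log-probability
    reward = verifier_score(tokens)             # verifier reward per sample
    advantage = reward - reward.mean()          # simple mean baseline
    loss = -(advantage.detach() * logp).mean()  # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: mean reward {reward.mean():.3f}")
```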