Training Diffusion Models with Reinforcement Learning

May 22, 2023
Authors: Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine
cs.AI

Abstract

Diffusion models are a class of flexible generative models trained with an approximation to the log-likelihood objective. However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for such objectives. We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms, which we refer to as denoising diffusion policy optimization (DDPO), that are more effective than alternative reward-weighted likelihood approaches. Empirically, DDPO is able to adapt text-to-image diffusion models to objectives that are difficult to express via prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Finally, we show that DDPO can improve prompt-image alignment using feedback from a vision-language model without the need for additional data collection or human annotation.
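To make the abstract's framing concrete, here is a minimal sketch of a REINFORCE-style policy gradient update in which the denoising trajectory is treated as a multi-step decision process and every step is credited with the terminal reward. The `model.prior_sample` / `model.sample_step` interface and `reward_fn` are illustrative assumptions, not the authors' released API.

```python
import torch


def ddpo_update(model, optimizer, prompts, reward_fn, num_steps=50):
    """One REINFORCE-style update over a batch of prompts (sketch).

    `model.prior_sample` and `model.sample_step` are hypothetical: the first
    draws x_T from the Gaussian prior, the second performs one denoising
    transition and returns the next latent together with the log-probability
    of that transition under the current model.
    """
    x = model.prior_sample(len(prompts))               # x_T ~ N(0, I)
    log_probs = []
    for t in reversed(range(num_steps)):               # denoising viewed as an MDP rollout
        x, log_prob = model.sample_step(x, t, prompts)
        log_probs.append(log_prob)                     # shape: (batch,)

    # Terminal reward only, e.g. image compressibility or an aesthetic score.
    rewards = reward_fn(x, prompts)
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # Score-function policy gradient: every denoising step shares the final reward.
    loss = -(torch.stack(log_probs).sum(dim=0) * advantages.detach()).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```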