
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

November 22, 2023
Authors: Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li
cs.AI

Abstract

Reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences, then leverage RL techniques to fine-tune the underlying model. However, crafting an effective reward model demands extensive datasets, a suitable architecture, and manual hyperparameter tuning, making the process both time- and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the need for a reward model. However, the extensive GPU memory required by the diffusion model's denoising process hinders the direct application of DPO. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. Our theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively acts as the optimal reward model trained on human feedback data to guide the learning process. Because the approach requires no reward-model training, it is more direct and cost-effective and incurs lower computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, achieving results comparable to methods that use ground-truth rewards. Moreover, D3PO reduces image distortion rates and generates safer images, overcoming the challenges posed by the lack of a robust reward model.
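
For reference, the generic DPO objective that D3PO builds on compares a preferred sample x_w against a dispreferred sample x_l under the current policy \pi_\theta and a frozen reference policy \pi_\mathrm{ref}. The display below is the standard DPO loss from the language-model setting, written in notation introduced here for illustration, not the paper's exact per-step formulation:

\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x_w,\, x_l)}\!\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(x_w)}{\pi_{\mathrm{ref}}(x_w)} \;-\; \beta \log \frac{\pi_\theta(x_l)}{\pi_{\mathrm{ref}}(x_l)} \right) \right]

Here \sigma is the logistic function and \beta controls how far the fine-tuned policy may drift from the reference. Evaluating the likelihood of a full multi-step denoising trajectory is what drives the GPU memory cost mentioned above; D3PO instead treats denoising as a sequential decision process and applies the preference comparison at the level of individual denoising steps, with \pi_\theta(x_{t-1} \mid x_t) playing the role of the policy.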