Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
November 22, 2023
Authors: Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li
cs.AI
Abstract
Using reinforcement learning with human feedback (RLHF) has shown significant
promise in fine-tuning diffusion models. Previous methods start by training a
reward model that aligns with human preferences, then leverage RL techniques to
fine-tune the underlying models. However, crafting an efficient reward model
demands extensive datasets, an optimal architecture, and manual hyperparameter
tuning, making the process both time- and cost-intensive. The direct preference
optimization (DPO) method, effective in fine-tuning large language models,
eliminates the necessity for a reward model. However, the extensive GPU memory
requirement of the diffusion model's denoising process hinders the direct
application of the DPO method. To address this issue, we introduce the Direct
Preference for Denoising Diffusion Policy Optimization (D3PO) method to
directly fine-tune diffusion models. The theoretical analysis demonstrates that
although D3PO omits training a reward model, it effectively functions as the
optimal reward model trained using human feedback data to guide the learning
process. Because it requires no reward-model training, this approach is more
direct and cost-effective, and it minimizes computational overhead. In
experiments, our method uses the relative scale of objectives as a proxy for
human preference, delivering comparable results to methods using ground-truth
rewards. Moreover, D3PO demonstrates the ability to reduce image distortion
rates and generate safer images, overcoming challenges posed by the lack of a
robust reward model.
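
To illustrate the core idea of applying a DPO-style objective directly to the denoising process, here is a minimal sketch in PyTorch. It assumes each denoising step is treated as an action whose log-probability can be evaluated under both the fine-tuned model and a frozen reference model; the function name d3po_step_loss, the variable names, and the beta value are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a per-step, DPO-style preference loss for diffusion
# fine-tuning. Assumptions (not from the paper): per-step log-probabilities
# of the chosen denoising action are already computed for the model being
# fine-tuned and for a frozen reference model.
import torch
import torch.nn.functional as F

def d3po_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style loss for one denoising step.

    logp_w / logp_l         : step log-probs under the fine-tuned model for the
                              human-preferred (w) and less-preferred (l) samples
    ref_logp_w / ref_logp_l : the same quantities under the frozen reference model
    beta                    : temperature controlling deviation from the reference
    """
    # Implicit per-step reward: scaled log-prob ratio against the reference model.
    reward_w = beta * (logp_w - ref_logp_w)
    reward_l = beta * (logp_l - ref_logp_l)
    # Bradley-Terry-style objective: push the preferred step's implicit reward
    # above the less-preferred one's, without training an explicit reward model.
    return -F.logsigmoid(reward_w - reward_l).mean()

# Toy usage with random per-step log-probabilities for a batch of 8 pairs.
logp_w, logp_l = torch.randn(8), torch.randn(8)
ref_logp_w, ref_logp_l = torch.randn(8), torch.randn(8)
print(d3po_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l))
```

In this sketch the preference signal enters only through which sample of a pair is labeled preferred, which mirrors the abstract's claim that the method needs human feedback data but no separately trained reward model.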