
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

November 22, 2023
Authors: Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li
cs.AI

Abstract

Using reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences and then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, an optimal architecture, and manual hyperparameter tuning, making the process both time- and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the need for a reward model. However, the extensive GPU memory requirements of the diffusion model's denoising process hinder the direct application of DPO. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. Our theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained on human feedback data to guide the learning process. Because it requires no reward-model training, this approach is more direct and cost-effective and minimizes computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference and delivers results comparable to methods that use ground-truth rewards. Moreover, D3PO can reduce image distortion rates and generate safer images, overcoming the challenges posed by the lack of a robust reward model.
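
To make the general mechanism described above concrete, here is a minimal PyTorch sketch of a DPO-style preference loss applied at the level of individual denoising steps. The function name `d3po_style_step_loss`, the argument names, and the `beta` value are illustrative assumptions rather than the authors' implementation; the sketch assumes per-step log-probabilities of the sampled denoising actions are available for both the fine-tuned model and a frozen reference model.

```python
# Hypothetical sketch: DPO-style preference loss for one denoising step.
# `logp_*` are log pi_theta(action | state) for the human-preferred (w) and
# dispreferred (l) trajectories; `ref_logp_*` are the same quantities under
# the frozen pre-trained (reference) model. Not the authors' code.
import torch
import torch.nn.functional as F

def d3po_style_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Log-ratios between the fine-tuned and reference policies.
    ratio_w = logp_w - ref_logp_w
    ratio_l = logp_l - ref_logp_l
    # Negative log-sigmoid of the margin: pushes the preferred action's ratio
    # above the dispreferred one, without an explicitly trained reward model.
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()

# Dummy usage with a batch of 4 preference pairs at a single denoising step;
# in practice the loss is accumulated over the steps of each generated pair,
# and gradients flow only through the fine-tuned policy's log-probabilities.
logp_w, logp_l = torch.randn(4), torch.randn(4)
ref_logp_w, ref_logp_l = torch.randn(4), torch.randn(4)
print(d3po_style_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l).item())
```

The point of this form, as the abstract suggests, is that the pairwise human-preference signal replaces an explicitly trained reward model, while the log-ratio against the frozen reference model keeps the fine-tuned policy close to the pre-trained diffusion model.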