Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

November 22, 2023
Authors: Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li
cs.AI

Abstract

Reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences and then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, an optimal architecture, and manual hyperparameter tuning, making the process both time- and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the need for a reward model. However, the extensive GPU memory requirement of the diffusion model's denoising process hinders the direct application of DPO. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. Our theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained on human feedback data to guide the learning process. Because it requires no reward-model training, the approach is more direct and cost-effective and minimizes computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering results comparable to methods that use ground-truth rewards. Moreover, D3PO reduces image distortion rates and generates safer images, overcoming the challenge of lacking a robust reward model.
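To make the preference objective concrete, below is a minimal, hypothetical PyTorch sketch of the DPO-style pairwise loss that a D3PO-like method applies at the level of denoising trajectories. The function name `d3po_style_loss`, the use of summed per-trajectory log-probabilities, and the `beta` value are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def d3po_style_loss(logp_theta_win, logp_ref_win,
                    logp_theta_lose, logp_ref_lose, beta=0.1):
    """DPO-style preference loss over pairs of denoising trajectories.

    Each argument is the log-probability of a sampled trajectory, summed
    over denoising steps, under the current policy (theta) or the frozen
    reference model (ref); `win`/`lose` denote the human-preferred and
    dispreferred samples of each pair. `beta` scales the implicit reward.
    """
    # Implicit "reward" of a trajectory: its log-prob ratio to the reference.
    ratio_win = logp_theta_win - logp_ref_win
    ratio_lose = logp_theta_lose - logp_ref_lose
    # Maximize the margin between preferred and dispreferred trajectories.
    return -F.logsigmoid(beta * (ratio_win - ratio_lose)).mean()

if __name__ == "__main__":
    # Toy usage with random stand-in log-probabilities (batch of 4 pairs).
    b = 4
    loss = d3po_style_loss(torch.randn(b), torch.randn(b),
                           torch.randn(b), torch.randn(b))
    print(float(loss))
```

In a full pipeline, these log-probabilities would presumably be accumulated from the diffusion policy's per-step denoising transitions along each sampled trajectory, which is what lets the preference signal reach the model without ever training a separate reward model.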