
mDPO: Conditional Preference Optimization for Multimodal Large Language Models

June 17, 2024
Authors: Fei Wang, Wenxuan Zhou, James Y. Huang, Nan Xu, Sheng Zhang, Hoifung Poon, Muhao Chen
cs.AI

Abstract

Direct preference optimization (DPO) has been shown to be an effective method for large language model (LLM) alignment. Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement. Through a comparative experiment, we identify the unconditional preference problem in multimodal preference optimization, where the model overlooks the image condition. To address this problem, we propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference. Moreover, we introduce a reward anchor that forces the reward to be positive for chosen responses, thereby avoiding the decrease in their likelihood -- an intrinsic problem of relative preference optimization. Experiments on two multimodal LLMs of different sizes and three widely used benchmarks demonstrate that mDPO effectively addresses the unconditional preference problem in multimodal preference optimization and significantly improves model performance, particularly in reducing hallucination.
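
The sketch below illustrates one way the objective described in the abstract could be assembled: a standard DPO term over response pairs, an image-conditional term that contrasts the original image against a less informative (corrupted) one, and an anchor that keeps the implicit reward of the chosen response positive. It is a minimal sketch assuming precomputed log-probabilities from the policy and a frozen reference model; the function name, argument names, and the specific corruption strategy are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mdpo_style_loss(
    logp_chosen,               # policy log p(chosen response | image, prompt)
    logp_rejected,             # policy log p(rejected response | image, prompt)
    ref_logp_chosen,           # same quantities under the frozen reference model
    ref_logp_rejected,
    logp_chosen_corrupt_img,   # policy log p(chosen response | corrupted image, prompt)
    ref_logp_chosen_corrupt_img,
    beta: float = 0.1,
):
    """Hypothetical sketch of a conditional preference objective in the spirit of mDPO."""
    # Implicit rewards, as in standard DPO: beta * (policy log-prob - reference log-prob).
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    r_corrupt_img = beta * (logp_chosen_corrupt_img - ref_logp_chosen_corrupt_img)

    # 1) Standard DPO term: prefer the chosen over the rejected response,
    #    both conditioned on the full image.
    loss_dpo = -F.logsigmoid(r_chosen - r_rejected)

    # 2) Image-conditional term: the chosen response should score higher with the
    #    true image than with a corrupted one, so the preference cannot be satisfied
    #    while ignoring the visual input.
    loss_image = -F.logsigmoid(r_chosen - r_corrupt_img)

    # 3) Reward anchor: push the implicit reward of the chosen response to stay
    #    positive, so its likelihood is not driven down by purely relative terms.
    loss_anchor = -F.logsigmoid(r_chosen)

    return (loss_dpo + loss_image + loss_anchor).mean()

# Toy usage with dummy batch-of-2 log-probabilities.
if __name__ == "__main__":
    dummy = lambda: torch.randn(2)
    loss = mdpo_style_loss(dummy(), dummy(), dummy(), dummy(), dummy(), dummy())
    print(loss.item())
```

In this sketch the anchor acts as an absolute constraint alongside the two relative preference terms, which is what prevents the chosen response's likelihood from collapsing during optimization.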
