

Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward

April 1, 2024
Authors: Ruohong Zhang, Liangke Gui, Zhiqing Sun, Yihao Feng, Keyang Xu, Yuanhan Zhang, Di Fu, Chunyuan Li, Alexander Hauptmann, Yonatan Bisk, Yiming Yang
cs.AI

Abstract

Preference modeling techniques, such as direct preference optimization (DPO), have proven effective in enhancing the generalization abilities of large language models (LLMs). However, in tasks involving video instruction following, providing informative feedback, especially for detecting hallucinations in generated responses, remains a significant challenge. Previous studies have explored using large multimodal models (LMMs) as reward models to guide preference modeling, but their ability to accurately assess the factuality of generated responses against the corresponding videos has not been conclusively established. This paper introduces a novel framework that utilizes detailed video captions as a proxy for video content, enabling language models to incorporate this information as supporting evidence when scoring video question answering (QA) predictions. Our approach demonstrates robust alignment with the reward mechanism of OpenAI's GPT-4V model, which takes video frames directly as input. Furthermore, we show that applying this tailored reward through DPO significantly improves the performance of video LMMs on video QA tasks.
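
For context, the preference-optimization step referred to in the abstract follows the standard DPO objective; the formulation below is a reference sketch of that standard loss, and the paper's video-specific instantiation may differ in detail. Here $x$ denotes the video-and-question prompt, $y_w$ and $y_l$ the preferred and dispreferred responses as ranked by the language-model reward, $\pi_{\mathrm{ref}}$ a frozen reference model, $\beta$ a scaling hyperparameter, and $\sigma$ the sigmoid function:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

In the framework described above, the preference pairs $(y_w, y_l)$ would be obtained by having the language model score candidate answers against the detailed video caption, which serves as a proxy for the raw video frames.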
