InterFeedback: Unveiling Interactive Intelligence of Large Multimodal Models via Human Feedback
February 20, 2025
Authors: Henry Hengyuan Zhao, Wenqi Pei, Yifei Tao, Haiyang Mei, Mike Zheng Shou
cs.AI
Abstract
Existing benchmarks do not test Large Multimodal Models (LMMs) on their interactive intelligence with human users, which is vital for developing general-purpose AI assistants. We design InterFeedback, an interactive framework that can be applied to any LMM and dataset to assess this ability autonomously. On top of this, we introduce InterFeedback-Bench, which evaluates interactive intelligence using two representative datasets, MMMU-Pro and MathVerse, to test 10 different open-source LMMs. Additionally, we present InterFeedback-Human, a newly collected dataset of 120 cases designed for manually testing interactive performance in leading models such as OpenAI-o1 and Claude-3.5-Sonnet. Our evaluation results show that even state-of-the-art LMMs (such as OpenAI-o1) correct their results through human feedback in fewer than 50% of cases. Our findings point to the need for methods that can enhance LMMs' capability to interpret and benefit from feedback.
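To make the evaluation protocol described above concrete, the sketch below shows one way an interactive feedback loop could be scored: the model answers, a feedback provider (human or a stronger model acting as a proxy) flags wrong answers, and the metric is the fraction of initially wrong answers the model manages to fix. This is a minimal illustrative sketch under assumed interfaces (`Problem`, `query_fn`, `feedback_fn` are hypothetical names), not the authors' InterFeedback implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Problem:
    prompt: str
    answer: str  # ground-truth answer used to check correctness and craft feedback

def correction_rate(
    query_fn: Callable[[str, Optional[str]], str],  # (prompt, feedback or None) -> model answer
    feedback_fn: Callable[[str, str], str],         # (wrong answer, ground truth) -> feedback text
    problems: list[Problem],
    max_rounds: int = 3,
) -> float:
    """Fraction of initially wrong answers the model corrects after receiving feedback."""
    corrected, initially_wrong = 0, 0
    for p in problems:
        answer = query_fn(p.prompt, None)
        if answer.strip() == p.answer:
            continue  # already correct on the first try; no feedback rounds needed
        initially_wrong += 1
        for _ in range(max_rounds):
            # Re-query the model with feedback about its previous (wrong) answer.
            answer = query_fn(p.prompt, feedback_fn(answer, p.answer))
            if answer.strip() == p.answer:
                corrected += 1
                break
    return corrected / max(initially_wrong, 1)
```

Under this kind of metric, the abstract's headline finding corresponds to a correction rate below 0.5 even for the strongest models evaluated.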