Visual Question Decomposition on Multimodal Large Language Models
September 28, 2024
Authors: Haowei Zhang, Jianzhe Liu, Zhen Han, Shuo Chen, Bailan He, Volker Tresp, Zhiqiang Xu, Jindong Gu
cs.AI
Abstract
Question decomposition has emerged as an effective strategy for prompting
Large Language Models (LLMs) to answer complex questions. However, while
existing methods primarily focus on unimodal language models, the question
decomposition capability of Multimodal Large Language Models (MLLMs) has yet to
be explored. To this end, this paper explores visual question decomposition on
MLLMs. Specifically, we introduce a systematic evaluation framework including a
dataset and several evaluation criteria to assess the quality of the decomposed
sub-questions, revealing that existing MLLMs struggle to produce high-quality
sub-questions. To address this limitation, we propose a dedicated finetuning
dataset, DecoVQA+, for enhancing the model's question decomposition capability.
To enable models to perform appropriate selective decomposition, we further
propose an efficient finetuning pipeline that combines our proposed dataset
with a training objective for selective decomposition. Finetuned MLLMs
demonstrate significant improvements in both the quality of their
sub-questions and their policy of selective question decomposition. With
selective decomposition, the models also achieve higher accuracy on VQA
benchmark datasets.
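The selective decomposition strategy the abstract describes can be sketched as a simple inference loop: the model first judges whether a question needs decomposition, and only then generates and answers sub-questions before answering the main question. This is a minimal illustration, not the paper's actual pipeline; `query_mllm` and `toy_mllm` are hypothetical stand-ins for an MLLM inference call (image + text prompt → text).

```python
# Hedged sketch of selective visual question decomposition.
# `query_mllm` is a hypothetical callable (image, prompt) -> text, NOT an
# API from the paper.

def selective_decomposition_vqa(image, question, query_mllm):
    """Answer `question` about `image`, decomposing only when the model
    judges the question to be complex."""
    decision = query_mllm(
        image,
        f"Does answering '{question}' require breaking it into "
        "sub-questions? Reply yes or no.",
    )
    context = ""
    if decision.strip().lower().startswith("yes"):
        # Ask for sub-questions (assumed one per line) and answer each one.
        subs = query_mllm(
            image, f"List the sub-questions needed to answer: {question}"
        )
        for sub in filter(None, (s.strip() for s in subs.splitlines())):
            context += f"Q: {sub}\nA: {query_mllm(image, sub)}\n"
    # Answer the original question, conditioned on any sub-answers.
    return query_mllm(image, f"{context}Q: {question}\nA:")


# Toy stand-in model for demonstration: "decomposes" only questions
# containing the word "and"; always answers "answer" otherwise.
def toy_mllm(image, prompt):
    if "Reply yes or no" in prompt:
        return "yes" if "and" in prompt else "no"
    if prompt.startswith("List the sub-questions"):
        return "What is on the left?\nWhat is on the right?"
    return "answer"

print(selective_decomposition_vqa(None, "What color is the sky?", toy_mllm))
```

In the paper's setting, the decomposition decision is what the selective-decomposition training objective is meant to teach; in this sketch it is crudely delegated to a yes/no prompt.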