

Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs

June 20, 2024
Authors: Yuxuan Qiao, Haodong Duan, Xinyu Fang, Junming Yang, Lin Chen, Songyang Zhang, Jiaqi Wang, Dahua Lin, Kai Chen
cs.AI

Abstract

Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, which requires strong perception and reasoning faculties. Assessing these two competencies independently is crucial for model refinement, despite the inherent difficulty due to the intertwined nature of seeing and reasoning in existing VLMs. To tackle this issue, we present Prism, an innovative framework designed to disentangle the perception and reasoning processes involved in visual question solving. Prism comprises two distinct stages: a perception stage that utilizes a VLM to extract and articulate visual information in textual form, and a reasoning stage that formulates responses based on the extracted visual information using a Large Language Model (LLM). This modular design enables the systematic comparison and assessment of both proprietary and open-source VLMs for their perception and reasoning strengths. Our analytical framework provides several valuable insights, underscoring Prism's potential as a cost-effective solution for vision-language tasks. By combining a streamlined VLM focused on perception with a powerful LLM tailored for reasoning, Prism achieves superior results in general vision-language tasks while substantially cutting down on training and operational expenses. Quantitative evaluations show that Prism, when configured with a vanilla 2B LLaVA and freely accessible GPT-3.5, delivers performance on par with VLMs 10 times larger on the rigorous multimodal benchmark MMStar. The project is released at: https://github.com/SparksJoe/Prism.
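
To make the two-stage design concrete, here is a minimal Python sketch of the decoupled pipeline described in the abstract. The function names, signatures, and prompt wording are illustrative assumptions, not the repository's actual API; see https://github.com/SparksJoe/Prism for the real implementation.

```python
from typing import Callable

# Pluggable stage interfaces (assumed for this sketch): any VLM mapping
# (image, prompt) -> text and any text-only LLM mapping prompt -> text
# can fill the two slots.
PerceptionModel = Callable[[bytes, str], str]
ReasoningModel = Callable[[str], str]

def prism_answer(
    image: bytes,
    question: str,
    perceive: PerceptionModel,  # e.g., a wrapper around a 2B LLaVA
    reason: ReasoningModel,     # e.g., a wrapper around GPT-3.5
) -> str:
    """Answer a visual question in two decoupled stages."""
    # Perception stage: the VLM extracts and articulates the visual
    # information in textual form (a question-aware description).
    description = perceive(
        image,
        "Describe the image in detail, covering everything needed "
        f"to answer the question: {question}",
    )
    # Reasoning stage: a text-only LLM formulates the response from
    # the extracted description alone; it never sees the raw pixels.
    return reason(
        f"Image description:\n{description}\n\n"
        f"Question: {question}\n"
        "Answer using only the description above."
    )
```

Because the two stages communicate only through text, either side can be swapped out independently: holding the reasoning LLM fixed while varying the perception VLM (or vice versa) attributes benchmark performance to one capability or the other, and pairing a small perception-focused VLM with a strong off-the-shelf LLM yields the cost savings the abstract reports.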
