
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models

March 27, 2024
Authors: Yanwei Li, Yuechen Zhang, Chengyao Wang, Zhisheng Zhong, Yixin Chen, Ruihang Chu, Shaoteng Liu, Jiaya Jia
cs.AI

Abstract

In this work, we introduce Mini-Gemini, a simple and effective framework for enhancing multi-modality Vision Language Models (VLMs). Although advances in VLMs have enabled basic visual dialog and reasoning, a performance gap remains relative to advanced models such as GPT-4 and Gemini. We narrow this gap by mining the potential of VLMs for better performance and an any-to-any workflow from three aspects: high-resolution visual tokens, high-quality data, and VLM-guided generation. To enhance the visual tokens, we propose using an additional visual encoder for high-resolution refinement without increasing the visual token count. We further construct a high-quality dataset that promotes precise image comprehension and reasoning-based generation, expanding the operational scope of current VLMs. Overall, Mini-Gemini further mines the potential of VLMs and simultaneously empowers current frameworks with image understanding, reasoning, and generation. Mini-Gemini supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B parameters. It achieves leading performance on several zero-shot benchmarks and even surpasses established private models. Code and models are available at https://github.com/dvlab-research/MiniGemini.
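
To make the high-resolution refinement concrete, below is a minimal PyTorch sketch of the idea the abstract describes: low-resolution visual tokens attend to features from an extra high-resolution encoder, so each token absorbs finer detail while the token count fed to the LLM stays fixed. This is an illustration under stated assumptions, not the repository's actual implementation; the module name `PatchInfoMining`, the hidden dimension, and the token counts are hypothetical, and the paper's mechanism reportedly restricts each low-resolution token to its corresponding high-resolution sub-region rather than attending globally as done here.

```python
import torch
import torch.nn as nn

class PatchInfoMining(nn.Module):
    """Cross-attention sketch: low-resolution visual tokens act as queries
    and gather detail from high-resolution features, so the number of
    tokens passed to the LLM stays fixed. Names and sizes are illustrative."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # projects low-res tokens to queries
        self.k_proj = nn.Linear(dim, dim)  # projects high-res features to keys
        self.v_proj = nn.Linear(dim, dim)  # projects high-res features to values
        self.scale = dim ** -0.5

    def forward(self, lr_tokens: torch.Tensor, hr_feats: torch.Tensor) -> torch.Tensor:
        # lr_tokens: (B, N, C) from the low-resolution encoder (e.g., a ViT)
        # hr_feats:  (B, M, C) from the high-resolution encoder, with M >> N
        q = self.q_proj(lr_tokens)
        k = self.k_proj(hr_feats)
        v = self.v_proj(hr_feats)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Residual refinement: the output keeps N tokens but absorbs high-res detail.
        return lr_tokens + attn @ v

# Toy usage: 576 low-res tokens refined by 2304 high-res features.
refine = PatchInfoMining(dim=1024)
out = refine(torch.randn(1, 576, 1024), torch.randn(1, 2304, 1024))
assert out.shape == (1, 576, 1024)  # token count unchanged
```

Keeping the query count fixed is the point of the design: the LLM's context length and attention cost are unchanged, while each visual token carries information recovered from a higher-resolution view of the image.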
