

Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models

March 27, 2024
作者: Yanwei Li, Yuechen Zhang, Chengyao Wang, Zhisheng Zhong, Yixin Chen, Ruihang Chu, Shaoteng Liu, Jiaya Jia
cs.AI

Abstract

In this work, we introduce Mini-Gemini, a simple and effective framework for enhancing multi-modality Vision Language Models (VLMs). Although advances in VLMs have enabled basic visual dialog and reasoning, a performance gap persists compared to advanced models such as GPT-4 and Gemini. We aim to narrow this gap by mining the potential of VLMs for better performance and an any-to-any workflow from three aspects: high-resolution visual tokens, high-quality data, and VLM-guided generation. To enhance visual tokens, we propose utilizing an additional visual encoder for high-resolution refinement without increasing the visual token count. We further construct a high-quality dataset that promotes precise image comprehension and reasoning-based generation, expanding the operational scope of current VLMs. Overall, Mini-Gemini further mines the potential of VLMs and empowers current frameworks with image understanding, reasoning, and generation simultaneously. Mini-Gemini supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B. It achieves leading performance on several zero-shot benchmarks and even surpasses developed private models. Code and models are available at https://github.com/dvlab-research/MiniGemini.
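To illustrate the high-resolution refinement idea described in the abstract, the sketch below shows one way a fixed set of low-resolution visual tokens could query features from a separate high-resolution encoder via cross-attention, so the number of visual tokens passed to the LLM stays unchanged. This is a minimal sketch under assumptions, not the authors' implementation; the module name `HighResRefiner`, the embedding dimension, and the token counts are illustrative.

```python
# Minimal sketch (assumed, not the authors' code) of high-resolution refinement:
# low-resolution visual tokens attend to denser high-resolution features,
# and the output token count matches the input token count.
import torch
import torch.nn as nn


class HighResRefiner(nn.Module):
    """Refine low-res visual tokens with high-res features (token count preserved)."""

    def __init__(self, dim: int = 1024, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lowres_tokens: torch.Tensor, highres_feats: torch.Tensor) -> torch.Tensor:
        # lowres_tokens: (B, N, dim) -- e.g. N tokens from a standard visual encoder
        # highres_feats: (B, M, dim) -- M >> N features from a high-resolution encoder
        refined, _ = self.cross_attn(query=lowres_tokens,
                                     key=highres_feats,
                                     value=highres_feats)
        # Residual connection keeps the original tokens; the output shape stays
        # (B, N, dim), so the visual token count fed to the LLM does not grow.
        return self.norm(lowres_tokens + refined)


if __name__ == "__main__":
    refiner = HighResRefiner(dim=1024)
    low = torch.randn(2, 576, 1024)    # low-resolution visual tokens
    high = torch.randn(2, 2304, 1024)  # denser high-resolution features
    out = refiner(low, high)
    print(out.shape)  # torch.Size([2, 576, 1024])
```

The key design point this sketch captures is that higher-resolution detail is injected through attention rather than by tokenizing the high-resolution image directly, which would multiply the sequence length seen by the LLM.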

