OmniFusion Technical Report
April 9, 2024
Authors: Elizaveta Goncharova, Anton Razzhigaev, Matvey Mikhalchuk, Maxim Kurkin, Irina Abdullaeva, Matvey Skripkin, Ivan Oseledets, Denis Dimitrov, Andrey Kuznetsov
cs.AI
Abstract
Last year, multimodal architectures served up a revolution in AI-based approaches and solutions, extending the capabilities of large language models (LLMs). We propose OmniFusion, a model based on a pretrained LLM and adapters for the visual modality. We evaluated and compared several architecture design principles for better coupling of text and visual data: MLP and transformer adapters, various CLIP-ViT-based encoders (SigLIP, InternVIT, etc.) and their fusion approaches, image encoding methods (whole-image or tile encoding), and two 7B LLMs (a proprietary one and the open-source Mistral). Experiments on 8 visual-language benchmarks (VizWiz, Pope, MM-Vet, ScienceQA, MMBench, TextVQA, VQAv2, MMMU) show that the best OmniFusion setup achieves the top score on different VQA tasks in comparison with open-source LLaVA-like solutions. We also present a variety of scenarios in which OmniFusion provides highly detailed answers across domains: housekeeping, sightseeing, culture, medicine, handwritten and scanned equation recognition, etc. The Mistral-based OmniFusion model is an open-source solution, with weights, training and inference scripts available at https://github.com/AIRI-Institute/OmniFusion.
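
To make the adapter-based coupling concrete, below is a minimal, hypothetical sketch (not the released OmniFusion code) of how an MLP adapter can project visual encoder features into an LLM's token-embedding space so that image tokens can be concatenated with text tokens. The names and dimensions (e.g. vision_dim=1024, llm_dim=4096) are illustrative assumptions, not values taken from the report.

```python
# Minimal sketch of an MLP vision-to-LLM adapter, as described in the abstract.
# All dimensions and layer choices here are assumptions for illustration.
import torch
import torch.nn as nn


class MLPAdapter(nn.Module):
    """Projects visual encoder patch features into the LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096, hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # visual_features: (batch, num_patches, vision_dim), e.g. from a CLIP ViT
        # returns: (batch, num_patches, llm_dim) image tokens for the LLM
        return self.proj(visual_features)


if __name__ == "__main__":
    # Example: map a batch of patch embeddings to a 7B LLM's hidden size.
    adapter = MLPAdapter(vision_dim=1024, llm_dim=4096)
    fake_patches = torch.randn(1, 577, 1024)  # dummy CLIP-ViT-style output
    image_tokens = adapter(fake_patches)
    print(image_tokens.shape)  # torch.Size([1, 577, 4096])
```

The resulting image tokens would then be prepended or interleaved with the text-token embeddings before being fed to the frozen or fine-tuned LLM; the transformer-adapter variant mentioned in the abstract replaces this MLP with a small attention-based module.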