
OmniFusion Technical Report

April 9, 2024
Authors: Elizaveta Goncharova, Anton Razzhigaev, Matvey Mikhalchuk, Maxim Kurkin, Irina Abdullaeva, Matvey Skripkin, Ivan Oseledets, Denis Dimitrov, Andrey Kuznetsov
cs.AI

Abstract

Last year, multimodal architectures drove a revolution in AI-based approaches and solutions, extending the capabilities of large language models (LLMs). We propose OmniFusion, a model based on a pretrained LLM and adapters for the visual modality. We evaluated and compared several architecture design principles for better coupling of text and visual data: MLP and transformer adapters, various CLIP-ViT-based encoders (SigLIP, InternVIT, etc.) and approaches to fusing them, image encoding methods (whole-image or tile encoding), and two 7B LLMs (a proprietary one and the open-source Mistral). Experiments on eight visual-language benchmarks (VizWiz, POPE, MM-Vet, ScienceQA, MMBench, TextVQA, VQAv2, MMMU) show that the best OmniFusion setup achieves top scores across different VQA tasks compared with open-source LLaVA-like solutions. We also present a variety of scenarios in which OmniFusion provides highly detailed answers across domains: housekeeping, sightseeing, culture, medicine, recognition of handwritten and scanned equations, etc. The Mistral-based OmniFusion model is an open-source solution with weights, training, and inference scripts available at https://github.com/AIRI-Institute/OmniFusion.
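
To make the coupling scheme described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the general approach: a trainable MLP adapter projects frozen vision-encoder patch features into the LLM embedding space, with an optional tiling helper for the "tile encoding" regime. All class names, dimensions, and the tile size are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch only (not OmniFusion's actual code): an MLP adapter that
# maps vision-encoder patch features to LLM token embeddings, the general
# LLaVA-style coupling the abstract describes. Dimensions are assumptions.
import torch
import torch.nn as nn


class MLPAdapter(nn.Module):
    """Projects visual patch embeddings into the LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096,
                 hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        # returns:        (batch, num_patches, llm_dim)
        return self.proj(patch_features)


def tile_image(image: torch.Tensor, tile: int = 336) -> torch.Tensor:
    """Split an image tensor (C, H, W) into non-overlapping tiles; in the
    'tile encoding' regime each tile is passed through the vision encoder
    separately. The tile size here is an assumed value."""
    c, h, w = image.shape
    image = image[:, : h - h % tile, : w - w % tile]  # crop to tile multiple
    tiles = image.unfold(1, tile, tile).unfold(2, tile, tile)
    # (C, nH, nW, tile, tile) -> (nH * nW, C, tile, tile)
    return tiles.permute(1, 2, 0, 3, 4).reshape(-1, c, tile, tile)


# The projected visual tokens are concatenated with the text token embeddings
# before being fed to the (frozen or lightly tuned) LLM.
visual_tokens = MLPAdapter()(torch.randn(1, 256, 1024))
print(visual_tokens.shape)  # torch.Size([1, 256, 4096])
print(tile_image(torch.randn(3, 700, 500)).shape)  # torch.Size([2, 3, 336, 336])
```

The paper also evaluates a transformer adapter as an alternative to the MLP; in that variant the linear projection above would be replaced by a small attention-based module, but the overall pipeline (encode, project, concatenate with text tokens) is the same.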
