Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

June 15, 2023
Authors: Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.AI

Abstract

Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing the diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset of multi-turn dialogues, including 69K image instances and 50K video instances. We have made our data, code, and model publicly available, which we hope will pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
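To make the three-component design concrete, below is a minimal PyTorch sketch of one plausible alignment module: learnable query tokens attend over the features produced by a modality encoder and compress them into a fixed number of embeddings in the LLM's text space. This is an illustration under stated assumptions, not the authors' implementation; the class name `AlignmentModule`, the dimensions `d_modality` and `d_text`, the number of alignment tokens, and the attention-based design are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class AlignmentModule(nn.Module):
    """Hypothetical sketch of an alignment module (not the paper's code).

    Maps variable-length features from a frozen modality encoder
    (image/audio/video) into a fixed number of embeddings in the
    LLM's text embedding space, so they can be prepended to the
    text token embeddings consumed by the cognitive module.
    """

    def __init__(self, d_modality: int, d_text: int, n_align_tokens: int = 64):
        super().__init__()
        # Learnable query tokens that live in the text embedding space.
        self.queries = nn.Parameter(torch.randn(n_align_tokens, d_text))
        # Project modality features into the text embedding dimension.
        self.proj = nn.Linear(d_modality, d_text)
        # Cross-attention: queries attend over the projected modality features.
        self.attn = nn.MultiheadAttention(d_text, num_heads=8, batch_first=True)

    def forward(self, modality_feats: torch.Tensor) -> torch.Tensor:
        # modality_feats: (batch, seq_len, d_modality) from a frozen encoder.
        kv = self.proj(modality_feats)                        # (batch, seq_len, d_text)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        aligned, _ = self.attn(q, kv, kv)                     # (batch, n_align_tokens, d_text)
        return aligned

# Usage sketch: align visual features, then prepend them to text embeddings
# before feeding the combined sequence to the frozen LLM (cognitive module).
align = AlignmentModule(d_modality=1024, d_text=4096)
visual_feats = torch.randn(2, 257, 1024)        # e.g. patch features from an image encoder
prefix = align(visual_feats)                     # (2, 64, 4096)
text_embeds = torch.randn(2, 128, 4096)          # embedded instruction tokens
llm_inputs = torch.cat([prefix, text_embeds], dim=1)
```

The key design point the abstract emphasizes is that this bridging happens once, in text-embedding space, so the pretrained LLM itself needs no architectural changes to consume image, audio, or video inputs.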