TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models
November 8, 2023
Authors: Zhen Yang, Yingxue Zhang, Fandong Meng, Jie Zhou
cs.AI
Abstract
Although Multi-modal Large Language Models (MM-LLMs) have made exciting strides recently, they still struggle to efficiently model the interactions among multi-modal inputs and the generation in non-textual modalities. In this work, we propose TEAL (Tokenize and Embed ALL), an approach that treats the input from any modality as a token sequence and learns a joint embedding space for all modalities. Specifically, for the input from any modality, TEAL first discretizes it into a token sequence with an off-the-shelf tokenizer and embeds the token sequence into a joint embedding space with a learnable embedding matrix. MM-LLMs then simply predict the multi-modal tokens autoregressively, just as textual LLMs do. Finally, the corresponding de-tokenizer is applied to generate the output in each modality from the predicted token sequence. With the joint embedding space, TEAL enables frozen LLMs to perform both understanding and generation tasks involving non-textual modalities such as image and audio. The textual LLM thus serves merely as an interface and maintains its high performance in textual understanding and generation. Experiments show that TEAL achieves substantial improvements in multi-modal understanding and implements a simple scheme for multi-modal generation.
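To make the tokenize-embed-predict pipeline concrete, below is a minimal PyTorch sketch of the scheme the abstract describes. All names here (`TEALWrapper`, `joint_embed`, the vocabulary sizes, and the stand-in Transformer backbone) are illustrative assumptions, not details from the paper; a real system would plug in a pretrained textual LLM and an off-the-shelf non-text tokenizer/de-tokenizer.

```python
import torch
import torch.nn as nn

class TEALWrapper(nn.Module):
    """Sketch of the TEAL idea: discretize every modality into tokens,
    embed them in one joint space with a learnable matrix, and let a
    frozen textual LLM predict multi-modal tokens autoregressively.
    (Hypothetical names; not the authors' implementation.)"""

    def __init__(self, backbone, text_vocab_size, image_vocab_size, hidden_dim):
        super().__init__()
        self.backbone = backbone
        # Freeze the textual LLM: per the abstract, it only acts as an interface.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # One shared vocabulary: non-text token ids are offset past the text ids
        # so a single learnable embedding matrix covers all modalities.
        joint_vocab = text_vocab_size + image_vocab_size
        self.joint_embed = nn.Embedding(joint_vocab, hidden_dim)
        # Projection from hidden states back to logits over the joint vocabulary.
        self.lm_head = nn.Linear(hidden_dim, joint_vocab)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) mixed text / non-text ids in the joint vocab.
        h = self.joint_embed(token_ids)                  # map into the joint space
        # Causal mask so the stand-in encoder behaves autoregressively;
        # a real pretrained LLM handles this internally.
        causal = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        h = self.backbone(h, mask=causal)                # model cross-modal interactions
        return self.lm_head(h)                           # next-token logits, all modalities


# Toy usage with a stand-in backbone; vocabulary sizes are made up.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)
model = TEALWrapper(backbone, text_vocab_size=32000, image_vocab_size=8192, hidden_dim=256)
tokens = torch.randint(0, 32000 + 8192, (1, 16))   # mixed text + image tokens
logits = model(tokens)                             # (1, 16, 40192)
```

At inference, the predicted non-text token ids would be shifted back into the image (or audio) tokenizer's range and passed to the corresponding de-tokenizer to reconstruct the output in that modality.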