

Kosmos-2: Grounding Multimodal Large Language Models to the World

June 26, 2023
Authors: Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.AI

Abstract

We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent referring expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Data, demo, and pretrained models are available at https://aka.ms/kosmos-2.
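The abstract describes serializing a referring expression as a Markdown-style link, ``[text span](bounding boxes)'', with the bounding box expressed as discrete location tokens. The sketch below illustrates that idea only: the 32×32 grid, the ``<loc_i>`` token naming, and the helper names `box_to_location_tokens` / `ground_as_markdown` are assumptions made for illustration, not necessarily the exact tokenization or format used by Kosmos-2.

```python
# Minimal sketch: turn a normalized bounding box into discrete location tokens
# and render the grounded phrase as a Markdown-style link.
# Assumptions (illustrative only): a 32x32 grid of bins and "<loc_i>" token names.

GRID = 32  # hypothetical number of bins per image side


def box_to_location_tokens(box, grid=GRID):
    """Map a normalized (x0, y0, x1, y1) box to two location tokens:
    one for the top-left corner bin, one for the bottom-right corner bin."""
    x0, y0, x1, y1 = box

    def bin_index(x, y):
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        return row * grid + col  # flatten (row, col) into a single bin id

    return f"<loc_{bin_index(x0, y0)}>", f"<loc_{bin_index(x1, y1)}>"


def ground_as_markdown(text_span, box):
    """Render ``[text span](bounding box)`` with the box given as location tokens."""
    top_left, bottom_right = box_to_location_tokens(box)
    return f"[{text_span}]({top_left}{bottom_right})"


# Example: a phrase grounded to a box covering roughly the right half of the image.
print(ground_as_markdown("a snowman", (0.55, 0.20, 0.95, 0.90)))
# -> [a snowman](<loc_209><loc_926>)
```

Because the grounded expression is just a token sequence in this form, it can be interleaved with ordinary text in the training corpus, which is how grounded image-text pairs such as those in GrIT can be consumed by a language-model objective.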