Kosmos-2: Grounding Multimodal Large Language Models to the World
June 26, 2023
Authors: Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.AI
Abstract
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent referring
expressions as links in Markdown, i.e., "[text span](bounding boxes)", where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Data, demo, and pretrained models are available at
https://aka.ms/kosmos-2.
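
The abstract above describes encoding a referring expression as a Markdown-style link whose "URL" is a sequence of location tokens derived from a bounding box. The following is a minimal sketch of that representation, assuming a discretized location-token grid and a `<loc_i>` token naming scheme; the grid size and token names are illustrative assumptions, not details given in the abstract.

```python
# Sketch: format a referring expression as "[text span](bounding boxes)",
# where the bounding box is discretized into location tokens on a coarse grid.
# The 32x32 grid and <loc_i> token names are assumptions for illustration.

def box_to_location_tokens(box, image_width, image_height, grid_size=32):
    """Discretize a pixel-space box (x1, y1, x2, y2) into two grid location tokens
    (top-left and bottom-right cells)."""
    x1, y1, x2, y2 = box
    col1 = min(int(x1 / image_width * grid_size), grid_size - 1)
    row1 = min(int(y1 / image_height * grid_size), grid_size - 1)
    col2 = min(int(x2 / image_width * grid_size), grid_size - 1)
    row2 = min(int(y2 / image_height * grid_size), grid_size - 1)
    top_left = row1 * grid_size + col1
    bottom_right = row2 * grid_size + col2
    return f"<loc_{top_left}><loc_{bottom_right}>"

def to_grounded_markdown(text_span, box, image_width, image_height):
    """Render a referring expression as a Markdown-style grounded link."""
    tokens = box_to_location_tokens(box, image_width, image_height)
    return f"[{text_span}]({tokens})"

# Example: ground the phrase "a snowman" at pixel box (10, 20, 200, 300)
# in a 640x480 image.
print(to_grounded_markdown("a snowman", (10, 20, 200, 300), 640, 480))
# -> [a snowman](<loc_32><loc_650>)
```

Because the location tokens are plain vocabulary items, grounded spans of this form can be interleaved with ordinary text in the training data and generated by the language model like any other tokens.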