

VCoder: Versatile Vision Encoders for Multimodal Large Language Models

December 21, 2023
Authors: Jitesh Jain, Jianwei Yang, Humphrey Shi
cs.AI

Abstract

Humans possess the remarkable skill of Visual Perception, the ability to see and understand the seen, helping them make sense of the visual world and, in turn, reason. Multimodal Large Language Models (MLLM) have recently achieved impressive performance on vision-language tasks ranging from visual question-answering and image captioning to visual reasoning and image generation. However, when prompted to identify or count (perceive) the entities in a given image, existing MLLM systems fail. Working towards developing an accurate MLLM system for perception and reasoning, we propose using Versatile vision enCoders (VCoder) as perception eyes for Multimodal LLMs. We feed the VCoder with perception modalities such as segmentation or depth maps, improving the MLLM's perception abilities. Secondly, we leverage the images from COCO and outputs from off-the-shelf vision perception models to create our COCO Segmentation Text (COST) dataset for training and evaluating MLLMs on the object perception task. Thirdly, we introduce metrics to assess the object perception abilities in MLLMs on our COST dataset. Lastly, we provide extensive experimental evidence proving the VCoder's improved object-level perception skills over existing Multimodal LLMs, including GPT-4V. We open-source our dataset, code, and models to promote research at https://github.com/SHI-Labs/VCoder.
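To make the core idea more concrete, below is a minimal, illustrative sketch (not the authors' implementation; see the repository above for the actual code) of how extra perception inputs such as segmentation or depth maps could be encoded and fed to an MLLM alongside the usual image tokens. All class names, dimensions, and the HuggingFace-style `inputs_embeds` call are assumptions made for illustration.

```python
# Hypothetical sketch of a VCoder-style setup: auxiliary "perception" encoders turn
# segmentation / depth maps into tokens that are concatenated with the standard
# image tokens before being passed to the LLM. Names and shapes are illustrative.

import torch
import torch.nn as nn


class PerceptionAdapter(nn.Module):
    """Projects features from an auxiliary vision encoder into the LLM token space."""

    def __init__(self, feat_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_tokens, feat_dim) -> (batch, num_tokens, llm_dim)
        return self.proj(feats)


class VCoderStyleMLLM(nn.Module):
    """Wraps an image encoder and an LLM with extra perception-modality encoders."""

    def __init__(self, image_encoder, seg_encoder, depth_encoder, llm,
                 feat_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.image_encoder = image_encoder   # e.g. a CLIP-style ViT (assumed frozen)
        self.seg_encoder = seg_encoder       # encodes a segmentation map
        self.depth_encoder = depth_encoder   # encodes a depth map
        self.llm = llm                       # a decoder-only LLM accepting inputs_embeds
        self.image_adapter = PerceptionAdapter(feat_dim, llm_dim)
        self.seg_adapter = PerceptionAdapter(feat_dim, llm_dim)
        self.depth_adapter = PerceptionAdapter(feat_dim, llm_dim)

    def forward(self, image, seg_map, depth_map, text_embeds):
        # Each encoder is assumed to return (batch, num_tokens, feat_dim) features.
        img_tok = self.image_adapter(self.image_encoder(image))
        seg_tok = self.seg_adapter(self.seg_encoder(seg_map))
        dep_tok = self.depth_adapter(self.depth_encoder(depth_map))
        # Prepend perception tokens to the usual image + text token sequence.
        inputs = torch.cat([seg_tok, dep_tok, img_tok, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)
```

The design choice this sketch highlights is that the perception maps are treated as additional token streams with their own lightweight adapters, so the base image encoder and LLM can remain unchanged while the model gains access to explicit object-level signals.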