VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks

May 18, 2023
Authors: Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, Jifeng Dai
cs.AI

Abstract

Large language models (LLMs) have notably accelerated progress towards artificial general intelligence (AGI): their impressive zero-shot capacity for user-tailored tasks gives them immense potential across a range of applications. In the field of computer vision, however, despite the availability of numerous powerful vision foundation models (VFMs), these models are still restricted to tasks in a pre-defined form and struggle to match the open-ended task capabilities of LLMs. In this work, we present an LLM-based framework for vision-centric tasks, termed VisionLLM. This framework provides a unified perspective for vision and language tasks by treating images as a foreign language and aligning vision-centric tasks with language tasks that can be flexibly defined and managed using language instructions. An LLM-based decoder can then make appropriate predictions based on these instructions for open-ended tasks. Extensive experiments show that the proposed VisionLLM can achieve different levels of task customization through language instructions, from fine-grained object-level to coarse-grained task-level customization, all with good results. Notably, with a generalist LLM-based framework, our model achieves over 60% mAP on COCO, on par with detection-specific models. We hope this model can set a new baseline for generalist vision and language models. A demo will be released based on https://github.com/OpenGVLab/InternGPT, and the code will be released at https://github.com/OpenGVLab/VisionLLM.
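
To make the "images as a foreign language" idea concrete, the sketch below illustrates how an instruction-driven interface of this kind could look for object detection: a language instruction defines the task and output format, and the free-form text produced by an LLM-based decoder is parsed back into structured boxes. This is a minimal, hypothetical example; the prompt template, the <image> placeholder, the box output format, and the helper names (build_instruction, parse_detections) are assumptions for illustration, not the released OpenGVLab/VisionLLM API.

# Hypothetical sketch of an instruction-driven, open-ended decoding interface
# in the spirit of VisionLLM. Prompt format and parsing are illustrative only.
import re
from typing import List, Tuple

def build_instruction(task: str, classes: List[str]) -> str:
    """Compose a language instruction that defines a vision-centric task."""
    return (
        f"<image> Please perform {task} on the image. "
        f"Only report objects of these categories: {', '.join(classes)}. "
        "Answer with one line per object: <class> <x1> <y1> <x2> <y2>."
    )

def parse_detections(decoder_output: str) -> List[Tuple[str, float, float, float, float]]:
    """Parse the decoder's free-form text back into (class, x1, y1, x2, y2) boxes."""
    boxes = []
    for line in decoder_output.strip().splitlines():
        m = re.match(r"(\w+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)", line)
        if m:
            cls, *coords = m.groups()
            boxes.append((cls, *map(float, coords)))
    return boxes

if __name__ == "__main__":
    instruction = build_instruction("object detection", ["person", "dog"])
    # Stand-in for the LLM-based decoder conditioned on image tokens plus the instruction.
    fake_output = "person 0.10 0.20 0.55 0.90\ndog 0.60 0.40 0.95 0.85"
    print(instruction)
    print(parse_detections(fake_output))

Because the task is specified entirely in the instruction text, swapping in a different instruction (e.g., segmentation or captioning with a different output format) would change what the decoder is asked to produce without changing the model interface, which is the kind of task-level customization the abstract describes.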