
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception

July 11, 2024
Authors: Xiaotong Li, Fan Zhang, Haiwen Diao, Yueze Wang, Xinlong Wang, Ling-Yu Duan
cs.AI

Abstract

Existing Multimodal Large Language Models (MLLMs) increasingly emphasize complex understanding of various visual elements, including multiple objects, text information, and spatial relations. Their development toward comprehensive visual perception hinges on the availability of high-quality image-text datasets that offer diverse visual elements and thorough image descriptions. However, the scarcity of such hyper-detailed datasets currently hinders progress within the MLLM community. The bottleneck stems from the limited perceptual capabilities of current caption engines, which fall short of providing complete and accurate annotations. To facilitate cutting-edge research on MLLMs for comprehensive visual perception, we propose Perceptual Fusion, a low-budget but highly effective caption engine for complete and accurate image descriptions. Specifically, Perceptual Fusion integrates diverse perception experts as image priors to provide explicit information about visual elements, and adopts an efficient MLLM as a central pivot to mimic the perception abilities of advanced MLLMs. We carefully select 1M highly representative images from the uncurated LAION dataset and generate dense descriptions with our engine; the resulting dataset is dubbed DenseFusion-1M. Extensive experiments validate that our engine outperforms its counterparts, and that the resulting dataset significantly improves the perception and cognition abilities of existing MLLMs across diverse vision-language benchmarks, especially with high-resolution images as inputs. The dataset and code are publicly available at https://github.com/baaivision/DenseFusion.
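
To make the fusion idea concrete, here is a minimal Python sketch of how perception-expert outputs might be serialized as explicit image priors and handed to a central captioning MLLM. All class, function, and expert names below (ExpertOutput, fuse_priors, build_caption_prompt, the stub detector and OCR experts) are hypothetical illustrations, not the authors' actual interfaces; the paper's real engine operates on visual features and prompts at scale.

```python
# Hypothetical sketch of Perceptual Fusion: several perception experts
# (detection, OCR, tagging) each describe the image; their findings are
# serialized as textual priors and prepended to the caption prompt for a
# central MLLM. Names and interfaces are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ExpertOutput:
    source: str       # e.g. "detector", "ocr", "tagger"
    description: str  # textual summary of the expert's findings

def fuse_priors(experts: List[Callable[[bytes], ExpertOutput]],
                image: bytes) -> str:
    """Run each perception expert on the image and serialize its
    findings as one explicit prior line per expert."""
    lines = []
    for expert in experts:
        out = expert(image)
        lines.append(f"[{out.source}] {out.description}")
    return "\n".join(lines)

def build_caption_prompt(priors: str) -> str:
    # The fused priors give the central MLLM explicit hints about
    # objects, embedded text, and spatial relations it might miss.
    return (
        "You are given expert annotations of an image:\n"
        f"{priors}\n"
        "Using these hints and the image itself, write a complete, "
        "accurate, and dense description covering all visual elements."
    )

if __name__ == "__main__":
    # Stub experts standing in for real detection and OCR models.
    detector = lambda img: ExpertOutput(
        "detector", "2 people; 1 dog; park bench at left")
    ocr = lambda img: ExpertOutput(
        "ocr", 'sign reads "NO DOGS OFF LEASH"')
    prompt = build_caption_prompt(
        fuse_priors([detector, ocr], b"<image bytes>"))
    print(prompt)  # would be sent to the central MLLM caption engine
```

The design point this sketch captures is that each expert contributes structured, explicit evidence, so the central (efficient) MLLM does not need the raw perceptual strength of a much larger model to produce dense, accurate descriptions.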