ChatPaper.ai

Towards Self-Improving Systematic Cognition for Next-Generation Foundation MLLMs

March 16, 2025
作者: Xiaoying Zhang, Da Peng, Yipeng Zhang, Zonghao Guo, Chengyue Wu, Chi Chen, Wei Ke, Helen Meng, Maosong Sun
cs.AI

Abstract

Despite their impressive capabilities, Multimodal Large Language Models (MLLMs) face challenges with fine-grained perception and complex reasoning. Prevalent multimodal pre-training approaches focus on enhancing perception by training on high-quality image captions due to the extremely high cost of collecting chain-of-thought (CoT) reasoning data for improving reasoning. While leveraging advanced MLLMs for caption generation enhances scalability, the outputs often lack comprehensiveness and accuracy. In this paper, we introduce Self-Improving cognition (SIcog), a self-learning framework designed to construct next-generation foundation MLLMs by enhancing their systematic cognitive capabilities through multimodal pre-training with self-generated data. Specifically, we propose Chain-of-Description, an approach that improves an MLLM's systematic perception by enabling step-by-step visual understanding, ensuring greater comprehensiveness and accuracy. Additionally, we adopt a structured CoT reasoning technique to enable MLLMs to integrate in-depth multimodal reasoning. To construct a next-generation foundation MLLM with self-improved cognition, SIcog first equips an MLLM with systematic perception and reasoning abilities using minimal external annotations. The enhanced models then generate detailed captions and CoT reasoning data, which are further curated through self-consistency. This curated data is ultimately used for multimodal pre-training to develop next-generation foundation models. Extensive experiments on both low- and high-resolution MLLMs across diverse benchmarks demonstrate that, with merely 213K self-generated pre-training samples, SIcog produces next-generation foundation MLLMs with significantly improved cognition, achieving benchmark-leading performance compared to prevalent pre-training approaches.
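The abstract mentions that self-generated captions and CoT data are "curated through self-consistency" before pre-training. As an illustration only, here is a minimal sketch of such a filter: it assumes a hypothetical setup where each question has several independently sampled model outputs, and keeps a sample only when a sufficient fraction of those outputs agree. The function name, data layout, and threshold are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def self_consistency_filter(samples, min_agreement=0.6):
    """Keep samples whose repeated generations agree on one answer.

    `samples`: list of (question, answers) pairs, where `answers` are
    several independent generations for the same question (a stand-in
    for an MLLM's sampled outputs). A sample survives curation when the
    most frequent answer accounts for at least `min_agreement` of the
    generations; the majority answer is kept as the curated label.
    """
    curated = []
    for question, answers in samples:
        answer, freq = Counter(answers).most_common(1)[0]
        if freq / len(answers) >= min_agreement:
            curated.append((question, answer))
    return curated

# Example: 4/5 agreement passes the 0.6 threshold; 1/5 does not.
samples = [
    ("q1", ["A", "A", "A", "B", "A"]),
    ("q2", ["A", "B", "C", "D", "E"]),
]
print(self_consistency_filter(samples))  # [('q1', 'A')]
```

A majority-vote threshold like this trades data volume for reliability: raising `min_agreement` discards more self-generated samples but leaves a cleaner pre-training set.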
