

VILA^2: VILA Augmented VILA

July 24, 2024
作者: Yunhao Fang, Ligeng Zhu, Yao Lu, Yan Wang, Pavlo Molchanov, Jang Hyun Cho, Marco Pavone, Song Han, Hongxu Yin
cs.AI

Abstract

Visual language models (VLMs) have progressed rapidly, driven by the success of large language models (LLMs). While model architectures and training infrastructures advance quickly, data curation remains under-explored. When data quantity and quality become a bottleneck, existing work either crawls more raw data directly from the Internet, with no guarantee of data quality, or distills from black-box commercial models (e.g., GPT-4V / Gemini), capping performance at that model's upper bound. In this work, we introduce a novel approach comprising a self-augment step and a specialist-augment step to iteratively improve data quality and model performance. In the self-augment step, a VLM recaptions its own pretraining data to enhance data quality, and is then retrained from scratch on this refined dataset to improve model performance. This process can iterate for several rounds. Once self-augmentation saturates, we employ several specialist VLMs, finetuned from the self-augmented VLM with domain-specific expertise, to further infuse specialist knowledge into the generalist VLM through task-oriented recaptioning and retraining. With the combined self-augmented and specialist-augmented training, we introduce VILA^2 (VILA-augmented-VILA), a VLM family that consistently improves accuracy on a wide range of tasks over prior art and achieves new state-of-the-art results on the MMMU leaderboard among open-source models.
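The two-stage loop the abstract describes can be sketched abstractly: recaption-and-retrain rounds whose gains shrink until they saturate, followed by a specialist-augmentation stage. The sketch below is a toy model only, assuming quality can be summarized as a single float; the functions, thresholds, and gain values are all hypothetical stand-ins, not the paper's actual training pipeline.

```python
# Toy sketch of VILA^2-style bootstrapping. Quality is modeled as one float;
# `self_augment` and `specialist_augment` are hypothetical stand-ins, not the
# paper's real recaptioning/retraining code.

def self_augment(quality: float, ceiling: float = 0.9,
                 gain: float = 0.5, eps: float = 1e-3) -> list:
    """Recaption + retrain rounds: each round closes part of the gap to a
    data-quality ceiling, so improvements shrink and eventually saturate."""
    history = [quality]
    while True:
        new_q = quality + gain * (ceiling - quality)  # one recaption/retrain round
        if new_q - quality < eps:                     # saturation check
            break
        quality = new_q
        history.append(quality)
    return history

def specialist_augment(quality: float, specialist_gains: list) -> float:
    """Specialists finetuned from the saturated model each contribute
    task-oriented recaptions; the generalist is retrained on their union."""
    return quality + sum(specialist_gains) / len(specialist_gains)

rounds = self_augment(0.5)
generalist = specialist_augment(rounds[-1], [0.05, 0.03, 0.04])
print(len(rounds), round(rounds[-1], 3), round(generalist, 3))
```

The key qualitative behavior matches the abstract: self-augmentation improves monotonically but plateaus below a ceiling, which is why the specialist stage is needed to push the generalist past what self-captioning alone can reach.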

