
From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration

March 17, 2025
Authors: Mingyang Song, Xiaoye Qu, Jiawei Zhou, Yu Cheng
cs.AI

Abstract

Large Vision-Language Models (LVLMs) have achieved significant progress in combining visual comprehension with language generation. Despite this success, the training data of LVLMs still suffers from Long-Tail (LT) problems, where the data distribution is highly imbalanced. Previous works have mainly focused on traditional VLM architectures, e.g., CLIP or ViT, and specific tasks such as recognition and classification. Nevertheless, LVLMs (e.g., LLaVA) and more general tasks (e.g., Visual Question Answering and Visual Reasoning) remain under-explored. In this paper, we first conduct an in-depth analysis of the LT issues in LVLMs and identify two core causes: the overrepresentation of head concepts and the underrepresentation of tail concepts. Based on these observations, we propose an Adaptive Data Refinement Framework (ADR), which consists of two stages: Data Rebalancing (DR) and Data Synthesis (DS). In the DR stage, we adaptively rebalance the redundant data based on entity distributions, while in the DS stage, we leverage Denoising Diffusion Probabilistic Models (DDPMs) and scarce images to supplement underrepresented portions. Through comprehensive evaluations across eleven benchmarks, our proposed ADR effectively mitigates the long-tail problem in the training data, improving the average performance of LLaVA 1.5 by a relative 4.36% without increasing the training data volume.
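To make the DR stage concrete, here is a minimal sketch of entity-distribution-based rebalancing: count how often each entity appears across training instances, cap counts at a chosen percentile of the entity-frequency distribution, and drop instances whose entities are all already saturated. The data layout (`instances` as dicts with an `entities` field) and the `cap_percentile` heuristic are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
import random

def rebalance(instances, cap_percentile=0.9, seed=0):
    """Downsample instances whose entities are over-represented.

    instances: list of dicts, each with an "entities" field listing the
    concepts mentioned in that (image, text) pair. All names here are
    illustrative; the paper's DR procedure may differ.
    """
    rng = random.Random(seed)
    freq = Counter(e for inst in instances for e in inst["entities"])

    # Cap each entity's count at a chosen percentile of the frequency
    # distribution (an assumed heuristic for "adaptive" rebalancing).
    counts = sorted(freq.values())
    cap = counts[int(cap_percentile * (len(counts) - 1))]

    pool = list(instances)
    rng.shuffle(pool)
    kept, seen = [], Counter()
    for inst in pool:
        # Keep the instance if any of its entities is still under the cap;
        # this preserves tail entities that co-occur with head entities.
        if any(seen[e] < cap for e in inst["entities"]):
            kept.append(inst)
            seen.update(inst["entities"])
    return kept
```

Capping per entity rather than globally is one way to remove head redundancy while never discarding an instance that is the only carrier of a tail concept.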

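For the DS stage, the abstract says DDPMs and scarce images are used to supplement tail concepts. As a hedged illustration only, the sketch below uses an off-the-shelf image-to-image diffusion pipeline from Hugging Face `diffusers` as a stand-in for the paper's DDPM setup, producing perturbed variants of a scarce tail-concept image; the model ID, prompt template, and `strength` value are all assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Stand-in generator: the paper names DDPMs; an image-to-image diffusion
# pipeline plays the analogous role of creating variants of scarce images.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def synthesize_variants(image_path, concept, n=4, strength=0.6):
    """Generate n perturbed variants of a scarce image for a tail concept."""
    init = Image.open(image_path).convert("RGB").resize((512, 512))
    prompt = f"a photo of {concept}"  # simple template; the paper may use richer prompts
    out = pipe(
        prompt=prompt,
        image=init,
        strength=strength,        # how far samples may deviate from the source image
        guidance_scale=7.5,
        num_images_per_prompt=n,
    )
    return out.images
```

Conditioning on the scarce source image (rather than sampling from scratch) keeps the synthesized data close to the real tail distribution, which matches the abstract's claim that performance improves without increasing the total training data volume once rebalancing and synthesis are combined.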