
From Head to Tail: Towards Balanced Representation in Large Vision-Language Models through Adaptive Data Calibration

March 17, 2025
Authors: Mingyang Song, Xiaoye Qu, Jiawei Zhou, Yu Cheng
cs.AI

Abstract

Large Vision-Language Models (LVLMs) have achieved significant progress in combining visual comprehension with language generation. Despite this success, the training data of LVLMs still suffers from Long-Tail (LT) problems, where the data distribution is highly imbalanced. Previous works have mainly focused on traditional VLM architectures, such as CLIP or ViT, and on specific tasks such as recognition and classification. However, LVLMs (e.g., LLaVA) and more general tasks (e.g., Visual Question Answering and Visual Reasoning) remain under-explored. In this paper, we first conduct an in-depth analysis of the LT issues in LVLMs and identify two core causes: the overrepresentation of head concepts and the underrepresentation of tail concepts. Based on these observations, we propose an Adaptive Data Refinement Framework (ADR), which consists of two stages: Data Rebalancing (DR) and Data Synthesis (DS). In the DR stage, we adaptively rebalance the redundant data based on entity distributions, while in the DS stage, we leverage Denoising Diffusion Probabilistic Models (DDPMs) and scarce images to supplement underrepresented portions. Through comprehensive evaluations across eleven benchmarks, our proposed ADR effectively mitigates the long-tail problem in the training data, improving the average performance of LLaVA 1.5 by a relative 4.36%, without increasing the training data volume.
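To make the DR stage concrete, here is a minimal sketch of entity-based rebalancing: instances whose entities all sit in the head of the distribution are down-sampled so the effective entity frequencies flatten toward a cap. This is an illustration under assumptions, not the authors' implementation; the function name `rebalance`, the `entities` annotation, and the quantile cap are all hypothetical.

```python
import random
from collections import Counter

def rebalance(instances, cap_quantile=0.9, seed=0):
    """Adaptively down-sample instances with over-represented entities.

    `instances` is a list of dicts, each with a non-empty "entities" list
    naming the concepts the sample mentions (a hypothetical schema).
    """
    rng = random.Random(seed)
    # Count how often each entity appears across the training set.
    freq = Counter(e for inst in instances for e in inst["entities"])
    # Cap = entity frequency at the chosen quantile of the distribution.
    cap = sorted(freq.values())[int(cap_quantile * (len(freq) - 1))]
    kept = []
    for inst in instances:
        # Keep probability is driven by the instance's rarest entity:
        # any tail entity (freq <= cap) guarantees the sample survives,
        # while pure-head instances are kept with probability cap / freq.
        rarest = min(freq[e] for e in inst["entities"])
        if rarest <= cap or rng.random() < cap / rarest:
            kept.append(inst)
    return kept

# Example usage with toy data:
data = [{"entities": ["dog"]}] * 100 + [{"entities": ["sextant"]}] * 2
print(len(rebalance(data)))  # head-only "dog" samples are thinned out
```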
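The DS stage can be sketched in the same spirit: seed a diffusion model with a scarce image of a tail concept and generate additional variants for it. The paper specifies DDPMs; the sketch below substitutes the Stable Diffusion image-to-image pipeline from Hugging Face `diffusers` as a stand-in, and the checkpoint name, file paths, prompt, and `strength` value are placeholder assumptions rather than the authors' setup.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an off-the-shelf image-to-image diffusion pipeline
# (checkpoint choice is illustrative, not from the paper).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A scarce image of a tail entity serves as the conditioning input;
# the path and the tail concept "sextant" are placeholders.
init = Image.open("tail_entity_example.jpg").convert("RGB").resize((512, 512))
variant = pipe(
    prompt="a photo of a sextant",
    image=init,
    strength=0.6,          # how far to diverge from the scarce source image
    guidance_scale=7.5,
).images[0]
variant.save("synth_sextant_0.png")
```

Lower `strength` keeps the synthesized sample closer to the scarce source image, trading diversity for fidelity to the tail concept.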
