ScalSelect: Scalable Training-Free Multimodal Data Selection for Efficient Visual Instruction Tuning
February 12, 2026
Authors: Changti Wu, Jiahuai Mao, Yuzhuo Miao, Shijie Lian, Bin Yu, Xiaopeng Lin, Cong Huang, Lei Zhang, Kai Chen
cs.AI
Abstract
Large-scale Visual Instruction Tuning (VIT) has become a key paradigm for advancing the performance of vision-language models (VLMs) across multimodal tasks. However, training on large-scale datasets is computationally expensive and inefficient due to data redundancy, which motivates multimodal data selection to improve training efficiency. Existing data selection methods for VIT either require costly training or gradient computation, or, when training-free, depend on proxy models or datasets, instruction-agnostic representations, or pairwise similarity with quadratic complexity, limiting scalability and representation fidelity. In this work, we propose ScalSelect, a scalable training-free multimodal data selection method whose time complexity is linear in the number of samples and which requires no external models or auxiliary datasets. ScalSelect first constructs sample representations by extracting the visual features most attended by instruction tokens in the target VLM, capturing instruction-relevant information. It then identifies samples whose representations best approximate the dominant subspace of the full dataset representations, enabling scalable importance scoring without pairwise comparisons. Extensive experiments across multiple VLMs, datasets, and selection budgets demonstrate that ScalSelect achieves over 97.5% of the performance of full-data training using only 16% of the data, and even outperforms full-data training in some settings. The code is available at https://github.com/ChangtiWu/ScalSelect.
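As a rough illustration of the two steps described in the abstract, the sketch below (i) builds an instruction-conditioned sample representation by pooling visual token features with the attention they receive from instruction tokens, and (ii) scores samples by how much of their representation lies in the dominant subspace (top singular directions) of the stacked representation matrix. The function names, the mean-pooling choice, and the truncated-SVD formulation are assumptions made for illustration, not the paper's exact method.

```python
# Minimal sketch of the two-step selection outlined in the abstract.
# Assumptions (not from the paper): attention is already averaged over
# heads/layers, representations are attention-weighted mean pools of
# visual features, and the "dominant subspace" is spanned by the top-k
# right singular vectors of the representation matrix.
import numpy as np


def sample_representation(attn, visual_feats, instr_idx, vis_idx):
    """Build one sample's representation from instruction-to-visual attention.

    attn:         (num_tokens, num_tokens) attention matrix
    visual_feats: (num_visual_tokens, d) visual token features
    instr_idx:    indices of instruction tokens
    vis_idx:      indices of visual tokens
    """
    # Attention mass each visual token receives from the instruction tokens.
    w = attn[np.ix_(instr_idx, vis_idx)].mean(axis=0)
    w = w / (w.sum() + 1e-8)
    # Attention-weighted pooling -> instruction-relevant visual representation.
    return w @ visual_feats


def scalselect_scores(reps, k=32):
    """Score samples by the energy of their representation inside the
    dominant subspace of all representations (higher = more representative)."""
    X = np.asarray(reps, dtype=np.float32)                     # (n, d)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)  # row-normalize
    k = min(k, min(X.shape))
    # Top-k right singular vectors span the dominant subspace in feature space;
    # cost is linear in the number of samples n for fixed feature dimension d.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:k].T                                           # (d, k)
    proj = X @ basis                                           # (n, k)
    return (proj ** 2).sum(axis=1)


def select(reps, budget):
    """Return indices of the `budget` highest-scoring samples."""
    return np.argsort(-scalselect_scores(reps))[:budget]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reps = rng.normal(size=(1000, 64))   # stand-in representations for 1000 samples
    chosen = select(reps, budget=160)    # keep roughly 16% of the data
    print(chosen[:10])
```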