HoneyBee: Data Recipes for Vision-Language Reasoners
October 14, 2025
Authors: Hritik Bansal, Devendra Singh Sachan, Kai-Wei Chang, Aditya Grover, Gargi Ghosh, Wen-tau Yih, Ramakanth Pasunuru
cs.AI
Abstract
Recent advances in vision-language models (VLMs) have made them highly
effective at reasoning tasks. However, the principles underlying the
construction of performant VL reasoning training datasets remain poorly
understood. In this work, we introduce several data curation approaches and
study their impacts on VL reasoning capabilities by carefully controlling
training and evaluation setups. We analyze the effects of context (image and
question pair) sources, implement targeted data interventions, and explore
scaling up images, questions, and chain-of-thought (CoT) solutions. Our
findings reveal that (a) context source strategies significantly affect VLM
performance, (b) interventions such as auxiliary signals from image captions
and the inclusion of text-only reasoning yield substantial gains, and (c)
scaling all data dimensions (e.g., unique questions per image and unique CoTs
per image-question pair) consistently improves reasoning capability. Motivated
by these insights, we introduce HoneyBee, a large-scale, high-quality CoT
reasoning dataset with 2.5M examples spanning 350K image-question pairs. VLMs
trained with HoneyBee outperform state-of-the-art models across model sizes.
For instance, a HoneyBee-trained VLM with 3B parameters outperforms the SOTA
model and the base model by 7.8% and 24.8%, respectively, on MathVerse.
Furthermore, we propose a test-time scaling strategy that reduces decoding cost
by 73% without sacrificing accuracy. Overall, this work presents improved
strategies for VL reasoning dataset curation research.
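
Finding (c) above concerns scaling the dataset along several dimensions. The sketch below is an illustrative Python snippet, not code from the paper: the CoTExample record and the scaling_dimensions helper are assumed names, and the snippet only shows one way the two dimensions named in (c), unique questions per image and unique CoTs per image-question pair, could be represented and measured over a dataset.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class CoTExample:
    image_id: str   # source image the question is grounded in
    question: str   # question posed about the image
    cot: str        # chain-of-thought solution ending in a final answer


def scaling_dimensions(examples):
    """Compute average unique questions per image and
    average unique CoTs per image-question pair."""
    questions_per_image = defaultdict(set)
    cots_per_pair = defaultdict(set)
    for ex in examples:
        questions_per_image[ex.image_id].add(ex.question)
        cots_per_pair[(ex.image_id, ex.question)].add(ex.cot)

    avg_questions = (
        sum(len(qs) for qs in questions_per_image.values())
        / max(len(questions_per_image), 1)
    )
    avg_cots = (
        sum(len(cs) for cs in cots_per_pair.values())
        / max(len(cots_per_pair), 1)
    )
    return avg_questions, avg_cots
```

Under this framing, scaling "all data dimensions" means growing the number of images, the number of unique questions attached to each image, and the number of distinct CoT solutions per image-question pair; the abstract reports that increasing each of these consistently improves reasoning capability.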