Small Language Model Meets with Reinforced Vision Vocabulary
January 23, 2024
Authors: Haoran Wei, Lingyu Kong, Jinyue Chen, Liang Zhao, Zheng Ge, En Yu, Jianjian Sun, Chunrui Han, Xiangyu Zhang
cs.AI
Abstract
Playing with Large Vision Language Models (LVLMs) was trendy in the AI
community in 2023. However, popular LVLMs have relatively large parameter
counts (more than 7B), which makes them difficult to train and deploy on
consumer GPUs and discourages many researchers with limited resources. Imagine
how cool it would be to experience all the features of current LVLMs on an old
GTX 1080 Ti (our only gaming card). Accordingly, in this report we present
Vary-toy, a small-size Vary that uses Qwen-1.8B as the base "large" language
model. In Vary-toy, we introduce an improved vision vocabulary that allows the
model not only to possess all the features of Vary but also to gain greater
generality. Specifically, during the generation of the vision vocabulary, we
replace negative samples of natural images with positive samples driven by
object detection, making fuller use of the vocabulary network's capacity and
enabling it to efficiently encode visual information corresponding to natural
objects. In experiments, Vary-toy achieves 65.6% ANLS on DocVQA, 59.1%
accuracy on ChartQA, 88.1% accuracy on RefCOCO, and 29% on MMVet. The code
will be publicly available on the homepage.
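The core architectural idea described above, a separate "vision vocabulary" network whose features are combined with a standard CLIP branch before being fed to the language model, can be sketched as follows. This is a minimal illustration only: the encoders are random stand-ins, and all module names and dimensions are assumptions rather than the paper's actual implementation.

```python
import numpy as np

# Hypothetical dimensions (assumptions for illustration, not from the paper).
NUM_TOKENS = 256   # image tokens produced by each vision branch
CLIP_DIM = 1024    # feature size of the frozen CLIP branch
VOCAB_DIM = 1024   # feature size of the reinforced vision-vocabulary branch
LLM_DIM = 2048     # hidden size of the base LLM (e.g. Qwen-1.8B)

rng = np.random.default_rng(0)

def encode_clip(image):
    # Stand-in for a frozen CLIP vision encoder.
    return rng.standard_normal((NUM_TOKENS, CLIP_DIM))

def encode_vocab(image):
    # Stand-in for the reinforced vision vocabulary network, which the
    # abstract says is trained with detection-driven positive samples
    # instead of negative samples of natural images.
    return rng.standard_normal((NUM_TOKENS, VOCAB_DIM))

# A linear projection mapping fused visual tokens into the LLM input space
# (a common design choice; hypothetical here).
W_proj = rng.standard_normal((CLIP_DIM + VOCAB_DIM, LLM_DIM)) * 0.01

def visual_tokens(image):
    # Concatenate the two branches channel-wise, then project so the
    # resulting token sequence can be consumed by the language model.
    fused = np.concatenate([encode_clip(image), encode_vocab(image)], axis=-1)
    return fused @ W_proj  # shape: (NUM_TOKENS, LLM_DIM)

tokens = visual_tokens(image=None)
print(tokens.shape)  # (256, 2048)
```

The point of the sketch is the split of visual encoding across two branches: the frozen CLIP branch keeps general image understanding, while the extra vocabulary network adds capacity for dense text and object information without enlarging the language model itself.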