

Let ViT Speak: Generative Language-Image Pre-training

May 1, 2026
作者: Yan Fang, Mengcheng Lan, Zilong Huang, Weixian Lei, Yunqing Zhao, Yujie Zhong, Yingchen Yu, Qi She, Yao Zhao, Yunchao Wei
cs.AI

Abstract

In this paper, we present Generative Language-Image Pre-training (GenLIP), a minimalist generative pretraining framework for Vision Transformers (ViTs) designed for multimodal large language models (MLLMs). To better align vision encoders with the autoregressive nature of LLMs, GenLIP trains a ViT to predict language tokens directly from visual tokens using a standard language modeling objective, without contrastive batch construction or an additional text decoder. This design offers three key advantages: (1) Simplicity: a single transformer jointly models visual and textual tokens; (2) Scalability: it scales effectively with both data and model size; and (3) Performance: it achieves competitive or superior results across diverse multimodal benchmarks. Trained on 8B samples from Recap-DataComp-1B, GenLIP matches or surpasses strong baselines despite using substantially less pretraining data. After continued pretraining on multi-resolution images at native aspect ratios, GenLIP further improves on detail-sensitive tasks such as OCR and chart understanding, making it a strong foundation for vision encoders in MLLMs.
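The objective described above can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' code: it assumes the transformer has already produced next-token logits for the concatenated [visual | text] sequence, and computes the standard language-modeling cross-entropy only at text positions, with no contrastive term and no separate text decoder.

```python
import numpy as np

def genlip_lm_loss(logits, text_ids, num_visual):
    """Language-modeling loss over text positions only (illustrative sketch).

    logits:     (seq_len, vocab) next-token logits for [visual | text];
                names and shapes here are assumptions for illustration.
    text_ids:   ground-truth text token ids, length num_text.
    num_visual: number of visual tokens prefixed to the sequence.
    """
    losses = []
    for t, tok in enumerate(text_ids):
        # Position i predicts token i+1, so the text token at sequence
        # index num_visual + t is predicted from index num_visual + t - 1.
        row = logits[num_visual + t - 1]
        # Numerically stable log-sum-exp for the softmax normalizer.
        m = row.max()
        lse = m + np.log(np.exp(row - m).sum())
        # Negative log-likelihood of the correct token.
        losses.append(lse - row[tok])
    return float(np.mean(losses))
```

Because visual positions are excluded from the loss, the ViT is supervised purely through how well its token representations let the shared transformer predict the caption, matching the "single transformer, standard LM objective" design in the abstract.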