
Let ViT Speak: Generative Language-Image Pre-training

May 1, 2026
作者: Yan Fang, Mengcheng Lan, Zilong Huang, Weixian Lei, Yunqing Zhao, Yujie Zhong, Yingchen Yu, Qi She, Yao Zhao, Yunchao Wei
cs.AI

Abstract

In this paper, we present Generative Language-Image Pre-training (GenLIP), a minimalist generative pretraining framework for Vision Transformers (ViTs) designed for multimodal large language models (MLLMs). To better align vision encoders with the autoregressive nature of LLMs, GenLIP trains a ViT to predict language tokens directly from visual tokens using a standard language modeling objective, without contrastive batch construction or an additional text decoder. This design offers three key advantages: (1) Simplicity: a single transformer jointly models visual and textual tokens; (2) Scalability: it scales effectively with both data and model size; and (3) Performance: it achieves competitive or superior results across diverse multimodal benchmarks. Trained on 8B samples from Recap-DataComp-1B, GenLIP matches or surpasses strong baselines despite using substantially less pretraining data. After continued pretraining on multi-resolution images at native aspect ratios, GenLIP further improves on detail-sensitive tasks such as OCR and chart understanding, making it a strong foundation for vision encoders in MLLMs.
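To make the objective concrete, below is a minimal sketch, not the authors' implementation, of the training setup the abstract describes: a single causal transformer consumes visual tokens followed by caption tokens and is trained with standard next-token cross-entropy on the text positions only, with no contrastive pairing and no separate text decoder. All names, layer sizes, and the simple linear patch embedder are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeLanguageImageModel(nn.Module):
    """One causal transformer over [visual tokens; text tokens], trained
    with next-token cross-entropy on the text positions only (a sketch)."""

    def __init__(self, vocab_size=32000, patch_dim=3 * 16 * 16,
                 dim=768, depth=12, heads=12):
        super().__init__()
        # Stand-in for a ViT patch embedder: one embedding per image patch.
        self.patch_embed = nn.Linear(patch_dim, dim)
        self.text_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, patches, text_ids):
        # patches:  (B, N_img, patch_dim) flattened image patches
        # text_ids: (B, N_txt) caption token ids
        vis = self.patch_embed(patches)          # (B, N_img, dim)
        txt = self.text_embed(text_ids)          # (B, N_txt, dim)
        x = torch.cat([vis, txt], dim=1)         # single joint sequence
        L = x.size(1)
        # Causal mask: every position attends only to earlier tokens.
        causal = torch.triu(torch.full((L, L), float("-inf"),
                                       device=x.device), diagonal=1)
        h = self.blocks(x, mask=causal)
        logits = self.lm_head(h)
        # The last visual position predicts the first caption token, and
        # each text position predicts its successor; no loss on image tokens.
        n_img, n_txt = vis.size(1), text_ids.size(1)
        pred = logits[:, n_img - 1 : n_img + n_txt - 1]
        return F.cross_entropy(pred.reshape(-1, pred.size(-1)),
                               text_ids.reshape(-1))

# Usage with dummy data (2 images as 14x14 patch grids, 32-token captions):
model = GenerativeLanguageImageModel()
patches = torch.randn(2, 196, 3 * 16 * 16)
text_ids = torch.randint(0, 32000, (2, 32))
loss = model(patches, text_ids)
loss.backward()
```

Because the loss is computed only at text positions, the visual tokens act purely as a prefix conditioning the caption prediction, which is what lets this design drop both contrastive batch construction and a dedicated text decoder.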