Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training
September 30, 2025
Authors: Junlin Han, Shengbang Tong, David Fan, Yufan Ren, Koustuv Sinha, Philip Torr, Filippos Kokkinos
cs.AI
Abstract
Large Language Models (LLMs), despite being trained on text alone,
surprisingly develop rich visual priors. These priors allow latent visual
capabilities to be unlocked for vision tasks with a relatively small amount of
multimodal data, and in some cases, to perform visual tasks without ever having
seen an image. Through systematic analysis, we reveal that visual priors (the
implicit, emergent knowledge about the visual world acquired during language
pre-training) are composed of separable perception and reasoning priors with
unique scaling trends and origins. We show that an LLM's latent visual
reasoning ability is predominantly developed by pre-training on
reasoning-centric data (e.g., code, math, academia) and scales progressively.
This reasoning prior acquired from language pre-training is transferable and
universally applicable to visual reasoning. In contrast, a perception prior
emerges more diffusely from broad corpora, and perception ability is more
sensitive to the vision encoder and visual instruction tuning data. In
parallel, text describing the visual world proves crucial, though its
performance impact saturates rapidly. Leveraging these insights, we propose a
data-centric recipe for pre-training vision-aware LLMs and verify it in
pre-training at the 1T-token scale. Our findings are grounded in over 100 controlled
experiments consuming 500,000 GPU-hours, spanning the full MLLM construction
pipeline (from LLM pre-training to visual alignment and supervised multimodal
fine-tuning) across five model scales, a wide range of data categories and
mixtures, and multiple adaptation setups. Along with our main findings, we
propose and investigate several hypotheses, and introduce the Multi-Level
Existence Bench (MLE-Bench). Together, this work provides a new way of
deliberately cultivating visual priors from language pre-training, paving the
way for the next generation of multimodal LLMs.
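The data-centric recipe itself is not spelled out in the abstract. Purely as an illustration of the idea, the sketch below shows how a pre-training corpus mixture might be weighted in line with the reported findings: reasoning-centric data (code, math, academic text) emphasized because it drives the transferable reasoning prior, and only a small share of visual-world descriptions because their benefit is said to saturate quickly. All category names and weights here are hypothetical assumptions, not the paper's published recipe.

```python
# Hypothetical sketch only: a toy pre-training data mixture for a
# "vision-aware" LLM, loosely reflecting the abstract's findings.
# Category names and weights are illustrative assumptions, not the
# paper's actual recipe.

from dataclasses import dataclass
import random


@dataclass
class DataSource:
    name: str      # corpus category
    weight: float  # sampling proportion within the token budget


MIXTURE = [
    DataSource("web_text",            0.45),
    DataSource("code",                0.25),  # reasoning-centric
    DataSource("math",                0.10),  # reasoning-centric
    DataSource("academic",            0.10),  # reasoning-centric
    DataSource("visual_descriptions", 0.05),  # small: benefit saturates fast
    DataSource("books_misc",          0.05),
]


def sample_source(rng: random.Random) -> str:
    """Sample one corpus category according to the mixture weights."""
    names = [s.name for s in MIXTURE]
    weights = [s.weight for s in MIXTURE]
    return rng.choices(names, weights=weights, k=1)[0]


if __name__ == "__main__":
    rng = random.Random(0)
    print([sample_source(rng) for _ in range(10)])
```

In practice such a mixture would be swept experimentally (as the paper's controlled experiments suggest); the sketch only makes the qualitative claim concrete, namely that reasoning-heavy categories take a large share while visual-description text stays a small, fixed slice.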