Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better

June 10, 2025
作者: Dianyi Wang, Wei Song, Yikun Wang, Siyuan Wang, Kaicheng Yu, Zhongyu Wei, Jiaqi Wang
cs.AI

Abstract

Typical large vision-language models (LVLMs) apply autoregressive supervision solely to textual sequences, without fully incorporating the visual modality into the learning process. This results in three key limitations: (1) an inability to utilize images without accompanying captions, (2) the risk that captions omit critical visual details, and (3) the challenge that certain vision-centric content cannot be adequately conveyed through text. As a result, current LVLMs often prioritize vision-to-language alignment while potentially overlooking fine-grained visual information. While some prior works have explored autoregressive image generation, effectively leveraging autoregressive visual supervision to enhance image understanding remains an open challenge. In this paper, we introduce Autoregressive Semantic Visual Reconstruction (ASVR), which enables joint learning of visual and textual modalities within a unified autoregressive framework. We show that autoregressively reconstructing the raw visual appearance of images does not enhance and may even impair multimodal understanding. In contrast, autoregressively reconstructing the semantic representation of images consistently improves comprehension. Notably, we find that even when models are given continuous image features as input, they can effectively reconstruct discrete semantic tokens, resulting in stable and consistent improvements across a wide range of multimodal understanding benchmarks. Our approach delivers significant performance gains across varying data scales (556k-2M) and types of LLM backbones. Specifically, ASVR improves LLaVA-1.5 by 5% in average scores across 14 multimodal benchmarks. The code is available at https://github.com/AlenjandroWang/ASVR.
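The abstract describes a joint autoregressive objective: the standard next-text-token loss is paired with a loss that reconstructs discrete semantic visual tokens, even though the model consumes continuous image features as input. The sketch below illustrates what such a combined objective could look like; it is a hypothetical outline, not the authors' implementation (see the linked repository for that), and the names `lvlm`, `vision_encoder`, `semantic_tokenizer`, and the loss weighting are assumptions.

```python
# Minimal sketch of a joint autoregressive objective in the spirit of ASVR.
# `lvlm`, `vision_encoder`, and `semantic_tokenizer` are placeholder callables;
# the real training code lives at https://github.com/AlenjandroWang/ASVR.

import torch
import torch.nn.functional as F


def joint_autoregressive_loss(lvlm, vision_encoder, semantic_tokenizer,
                              images, text_ids, vis_loss_weight=1.0):
    # Continuous image features are fed to the LVLM as usual (LLaVA-style input).
    image_feats = vision_encoder(images)              # (B, N_img, D)

    # Discrete semantic targets for the visual positions, e.g. indices from a
    # semantic codebook; the abstract only says "discrete semantic tokens",
    # so this tokenizer is an assumption.
    with torch.no_grad():
        vis_targets = semantic_tokenizer(images)      # (B, N_img), long

    # One forward pass yields next-token logits for both modalities.
    text_logits, vis_logits = lvlm(image_feats, text_ids)

    # Standard autoregressive text loss (shift-by-one next-token prediction).
    text_loss = F.cross_entropy(
        text_logits[:, :-1].flatten(0, 1), text_ids[:, 1:].flatten())

    # Autoregressive semantic visual reconstruction loss: predict the discrete
    # semantic token at each image position from the preceding context.
    vis_loss = F.cross_entropy(
        vis_logits.flatten(0, 1), vis_targets.flatten())

    return text_loss + vis_loss_weight * vis_loss
```

The key point the abstract makes is reflected in the choice of targets: supervising `vis_targets` drawn from a semantic representation helps understanding, whereas supervising raw pixel-level appearance does not and can hurt.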