
OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning

September 1, 2025
Authors: Yanqing Liu, Xianhang Li, Letian Zhang, Zirui Wang, Zeyu Zheng, Yuyin Zhou, Cihang Xie
cs.AI

Abstract

This paper simplifies OpenVision's architecture and loss design to improve its training efficiency. Following the prior vision-language pretraining works CapPa and AIMv2, as well as modern multimodal designs like LLaVA, our changes are straightforward: we remove the text encoder (and therefore the contrastive loss), retaining only the captioning loss as a purely generative training signal. We name this new version OpenVision 2. The initial results are promising: despite this simplification, OpenVision 2 matches the original model's performance on a broad set of multimodal benchmarks while substantially cutting both training time and memory consumption. For example, with ViT-L/14, it reduces training time by about 1.5x (from 83 hours to 57 hours) and memory usage by about 1.8x (from 24.5 GB to 13.8 GB, which in turn allows the maximum batch size to grow from 2k to 8k). This improved training efficiency also lets us scale far beyond the largest vision encoder used in OpenVision, to more than 1 billion parameters. We believe this lightweight, generative-only paradigm is compelling for future vision encoder development in multimodal foundation models.
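To make the generative-only objective concrete, below is a minimal PyTorch sketch of the kind of setup the abstract describes: a vision encoder produces visual tokens, a text decoder cross-attends to them, and the sole loss is next-token caption prediction. The class name, decoder depth, and hyperparameters here are illustrative assumptions, not OpenVision 2's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CaptionOnlyPretrainer(nn.Module):
    """Illustrative sketch: a vision encoder trained with a captioning loss only.

    There is no text encoder and no contrastive head; the sole training
    signal is the autoregressive caption-generation (next-token) loss.
    """

    def __init__(self, vision_encoder: nn.Module, vocab_size: int, dim: int = 1024):
        super().__init__()
        # Assumed: vision_encoder maps images to a token sequence of shape (B, N, dim).
        self.vision_encoder = vision_encoder
        self.token_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=16, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, images: torch.Tensor, caption_ids: torch.Tensor) -> torch.Tensor:
        # Visual tokens serve as the decoder's cross-attention memory.
        visual_tokens = self.vision_encoder(images)            # (B, N, dim)
        # Teacher forcing: predict token t from tokens < t.
        tgt = self.token_embed(caption_ids[:, :-1])            # (B, T-1, dim)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        hidden = self.decoder(tgt, visual_tokens, tgt_mask=mask)
        logits = self.lm_head(hidden)                          # (B, T-1, vocab_size)
        # The captioning loss is the only objective: no contrastive term.
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            caption_ids[:, 1:].reshape(-1),
        )
```

By contrast, a CLIP-style recipe would also run a separate text encoder and add a contrastive (InfoNCE-style) term over image-text pairs; dropping that encoder and loss is where the reported training-time and memory savings come from.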