
TULIP: Towards Unified Language-Image Pretraining

March 19, 2025
Authors: Zineng Tang, Long Lian, Seun Eisape, XuDong Wang, Roei Herzig, Adam Yala, Alane Suhr, Trevor Darrell, David M. Chan
cs.AI

Abstract

Despite the recent success of image-text contrastive models like CLIP and SigLIP, these models often struggle with vision-centric tasks that demand high-fidelity image understanding, such as counting, depth estimation, and fine-grained object recognition. These models, by performing language alignment, tend to prioritize high-level semantics over visual understanding, weakening their image understanding. On the other hand, vision-focused models are great at processing visual information but struggle to understand language, limiting their flexibility for language-driven tasks. In this work, we introduce TULIP, an open-source, drop-in replacement for existing CLIP-like models. Our method leverages generative data augmentation, enhanced image-image and text-text contrastive learning, and image/text reconstruction regularization to learn fine-grained visual features while preserving global semantic alignment. Our approach, scaling to over 1B parameters, outperforms existing state-of-the-art (SOTA) models across multiple benchmarks, establishing a new SOTA zero-shot performance on ImageNet-1K, delivering up to a 2× enhancement over SigLIP on RxRx1 in linear probing for few-shot classification, and improving vision-language models, achieving over 3× higher scores than SigLIP on MMVP. Our code/checkpoints are available at https://tulip-berkeley.github.io.
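
To make the objective described in the abstract concrete, here is a minimal sketch of how an image-text contrastive loss can be combined with image-image and text-text contrastive terms plus reconstruction regularization. This is an illustrative assumption only: the function names, loss weights, and InfoNCE formulation below are not taken from the TULIP paper or codebase, and the actual training objective may differ.

```python
# Hypothetical sketch of a multi-view contrastive objective with reconstruction
# regularization. Names and weights are illustrative assumptions, not TULIP's
# actual implementation.
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings (matching pairs on the diagonal)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # index of the positive pair
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def combined_loss(img_emb, img_emb_aug, txt_emb, txt_emb_aug,
                  recon_img_loss, recon_txt_loss,
                  w_ii=1.0, w_tt=1.0, w_rec=0.1):
    """Weighted sum of the loss terms named in the abstract (weights are assumed)."""
    l_it = info_nce(img_emb, txt_emb)       # image-text alignment (CLIP/SigLIP-style)
    l_ii = info_nce(img_emb, img_emb_aug)   # image-image term over augmented/generated views
    l_tt = info_nce(txt_emb, txt_emb_aug)   # text-text term over paraphrased captions
    return l_it + w_ii * l_ii + w_tt * l_tt + w_rec * (recon_img_loss + recon_txt_loss)


if __name__ == "__main__":
    B, D = 8, 512
    loss = combined_loss(torch.randn(B, D), torch.randn(B, D),
                         torch.randn(B, D), torch.randn(B, D),
                         recon_img_loss=torch.tensor(0.5),
                         recon_txt_loss=torch.tensor(0.3))
    print(loss.item())
```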
