
Retrieval-Enhanced Contrastive Vision-Text Models

June 12, 2023
Authors: Ahmet Iscen, Mathilde Caron, Alireza Fathi, Cordelia Schmid
cs.AI

Abstract

Contrastive image-text models such as CLIP form the building blocks of many state-of-the-art systems. While they excel at recognizing common generic concepts, they still struggle on fine-grained entities which are rare, or even absent from the pre-training dataset. Hence, a key ingredient to their success has been the use of large-scale curated pre-training data aiming at expanding the set of concepts that they can memorize during the pre-training stage. In this work, we explore an alternative to encoding fine-grained knowledge directly into the model's parameters: we instead train the model to retrieve this knowledge from an external memory. Specifically, we propose to equip existing vision-text models with the ability to refine their embedding with cross-modal retrieved information from a memory at inference time, which greatly improves their zero-shot predictions. Remarkably, we show that this can be done with a light-weight, single-layer, fusion transformer on top of a frozen CLIP. Our experiments validate that our retrieval-enhanced contrastive (RECO) training improves CLIP performance substantially on several challenging fine-grained tasks: for example +10.9 on Stanford Cars, +10.2 on CUB-2011 and +7.3 on the recent OVEN benchmark.
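The abstract describes refining a frozen CLIP embedding with cross-modal information retrieved from an external memory, fused by a single-layer transformer. Below is a minimal PyTorch sketch of that idea; the class name `RetrievalFusion`, the embedding dimension, and the use of `nn.TransformerEncoderLayer` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class RetrievalFusion(nn.Module):
    """Hypothetical sketch: refine a frozen CLIP embedding with k retrieved
    cross-modal embeddings using a single transformer encoder layer."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Light-weight, single-layer fusion transformer on top of frozen CLIP.
        self.fusion = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )

    def forward(self, query_emb: torch.Tensor, retrieved_embs: torch.Tensor) -> torch.Tensor:
        # query_emb:      (batch, dim)    frozen CLIP embedding of the input
        # retrieved_embs: (batch, k, dim) cross-modal neighbours fetched from memory
        tokens = torch.cat([query_emb.unsqueeze(1), retrieved_embs], dim=1)
        fused = self.fusion(tokens)
        # Take the refined query token as the new embedding for zero-shot matching.
        refined = fused[:, 0]
        return nn.functional.normalize(refined, dim=-1)


# Toy usage with random tensors standing in for CLIP outputs and memory hits.
batch, k, dim = 2, 10, 512
image_emb = torch.randn(batch, dim)          # frozen CLIP image embedding
retrieved_text = torch.randn(batch, k, dim)  # top-k text embeddings from external memory
refiner = RetrievalFusion(dim=dim)
refined_emb = refiner(image_emb, retrieved_text)
print(refined_emb.shape)  # torch.Size([2, 512])
```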