ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder
October 21, 2025
Authors: Xiaoxing Hu, Kaicheng Yang, Ziyong Feng, Qi Ming, Zonghao Guo, Xiang An, Junchi Yan, Xue Yang
cs.AI
Abstract
The original CLIP text encoder is limited by a maximum input length of 77
tokens, which hampers its ability to process long texts effectively and to perform
fine-grained semantic understanding. In addition, the CLIP text encoder lacks
support for multilingual inputs. These limitations significantly restrict
its applicability across a broader range of tasks. Recent studies have
attempted to replace the CLIP text encoder with an LLM-based embedder to
improve its capabilities in long-text processing, multilingual understanding, and
fine-grained semantic comprehension. However, because the representation spaces
of LLMs and the vision-language space of CLIP are pretrained independently
without alignment priors, direct alignment using contrastive learning can
disrupt the intrinsic vision-language alignment in the CLIP image encoder,
leaving the knowledge acquired during pretraining underutilized.
To address this challenge, we propose ProCLIP, a curriculum learning-based
progressive vision-language alignment framework to effectively align the CLIP
image encoder with an LLM-based embedder. Specifically, ProCLIP first distills
knowledge from CLIP's text encoder into the LLM-based embedder to leverage
CLIP's rich pretrained knowledge while establishing initial alignment between
the LLM embedder and CLIP image encoder. Subsequently, ProCLIP further aligns
the CLIP image encoder with the LLM-based embedder through image-text
contrastive tuning, employing self-distillation regularization to avoid
overfitting. To achieve more effective alignment, an instance semantic alignment
loss and an embedding structure alignment loss are employed during
representation inheritance and contrastive tuning. The code is available at
https://github.com/VisionXLab/ProCLIP.
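
To make the two-stage recipe concrete, here is a minimal PyTorch-style sketch of
the losses the abstract describes: a representation-inheritance stage that
distills CLIP text embeddings into the LLM-based embedder via instance semantic
and embedding structure alignment, and a contrastive-tuning stage regularized by
self-distillation from a frozen copy of the image encoder. The function names,
loss weighting, and temperature below are illustrative assumptions, not the
authors' released implementation.

```python
# Hypothetical sketch of ProCLIP-style progressive alignment losses.
# Assumed names (stage1_loss, stage2_loss, temperature=0.07, equal loss
# weights) are illustrative, not taken from the official repository.
import torch
import torch.nn.functional as F


def instance_alignment_loss(student_emb, teacher_emb):
    # Instance semantic alignment: pull each student embedding toward the
    # corresponding teacher embedding (1 - cosine similarity).
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()


def structure_alignment_loss(student_emb, teacher_emb):
    # Embedding structure alignment: match the pairwise similarity structure
    # of the student batch to that of the teacher batch.
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    return F.mse_loss(s @ s.t(), t @ t.t())


def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired image and text embeddings.
    i = F.normalize(image_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = i @ t.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))


def stage1_loss(llm_text_emb, clip_text_emb):
    # Stage 1 (representation inheritance): distill the CLIP text encoder's
    # embeddings (teacher) into the LLM-based embedder (student).
    return (instance_alignment_loss(llm_text_emb, clip_text_emb)
            + structure_alignment_loss(llm_text_emb, clip_text_emb))


def stage2_loss(image_emb, llm_text_emb, frozen_image_emb):
    # Stage 2 (contrastive tuning): align image and LLM text embeddings while
    # self-distilling from a frozen copy of the original CLIP image encoder
    # so the pretrained visual structure does not drift (anti-overfitting).
    return (contrastive_loss(image_emb, llm_text_emb)
            + instance_alignment_loss(image_emb, frozen_image_emb)
            + structure_alignment_loss(image_emb, frozen_image_emb))
```

The frozen-copy regularization in the sketch reflects the abstract's
self-distillation idea: the contrastive term adapts the encoders to the new
text space, while a copy of the original image representations anchors them to
what was learned during CLIP pretraining.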