FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models

March 9, 2026
Authors: Haoyang Li, Liang Wang, Siyu Zhou, Jiacheng Sun, Jing Jiang, Chao Wang, Guodong Long, Yan Peng
cs.AI

Abstract

CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to adapt efficiently to downstream tasks. Although existing studies have made significant progress, they pay limited attention to how the internal attention representations of VLMs change during tuning. In this paper, we attribute the failure modes of prompt-tuned predictions to shifts in the foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive, plug-and-play foreground attention guidance module that alleviates these shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance foreground view quality, applies a Foreground Distillation Compensation module to steer visual attention toward the foreground, and further adds a Prior Calibration module to mitigate the generalization degradation caused by excessive focus on the foreground. Experiments on multiple backbone models and datasets demonstrate the effectiveness and compatibility of FVG-PT. Code is available at: https://github.com/JREion/FVG-PT
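The three components named in the abstract map naturally onto a gating network plus two auxiliary losses. Below is a minimal PyTorch sketch of how such components might be wired, assuming CLIP-style pooled features; every class name, tensor shape, and loss form here is our own illustrative assumption, not the authors' released implementation (see the repository linked above).

```python
# Illustrative sketch only: names, shapes, and loss forms are assumptions,
# not the released FVG-PT implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ForegroundReliabilityGate(nn.Module):
    """Learnable gate that scores how reliable a cropped foreground view is,
    then blends its features with the full-image features (assumed design)."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.ReLU(),
            nn.Linear(dim // 4, 1), nn.Sigmoid(),
        )

    def forward(self, full_feat: torch.Tensor, fg_feat: torch.Tensor) -> torch.Tensor:
        g = self.score(fg_feat)                 # (B, 1), reliability in [0, 1]
        return g * fg_feat + (1.0 - g) * full_feat


def foreground_distillation_loss(attn: torch.Tensor, fg_mask: torch.Tensor) -> torch.Tensor:
    """Pull the tuned encoder's patch attention toward foreground tokens:
    KL divergence between the attention distribution and a normalized mask."""
    target = fg_mask / fg_mask.sum(dim=-1, keepdim=True).clamp_min(1e-6)
    return F.kl_div(attn.log_softmax(dim=-1), target, reduction="batchmean")


def prior_calibration_loss(logits_tuned: torch.Tensor, logits_zs: torch.Tensor) -> torch.Tensor:
    """Regularize tuned predictions toward the frozen zero-shot CLIP prior,
    limiting generalization loss from over-focusing on the foreground."""
    return F.kl_div(logits_tuned.log_softmax(dim=-1),
                    logits_zs.softmax(dim=-1), reduction="batchmean")


if __name__ == "__main__":
    B, D, P = 4, 512, 196                       # batch, feature dim, patch count
    gate = ForegroundReliabilityGate(D)
    fused = gate(torch.randn(B, D), torch.randn(B, D))
    l_fg = foreground_distillation_loss(torch.randn(B, P), torch.rand(B, P))
    l_pc = prior_calibration_loss(torch.randn(B, 100), torch.randn(B, 100))
    print(fused.shape, l_fg.item(), l_pc.item())
```

In a full pipeline, the two auxiliary terms would presumably be added, suitably weighted, to the standard prompt-tuning cross-entropy loss; the weights and the source of the foreground mask (e.g., attention rollout or a segmentation prior) are left open here.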