
Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization

May 12, 2025
Authors: Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
cs.AI

Abstract

Vision-language models (VLMs) have achieved remarkable success across diverse tasks by leveraging rich textual information with minimal labeled data. However, deploying such large models remains challenging, particularly in resource-constrained environments. Knowledge distillation (KD) offers a well-established solution to this problem; however, recent KD approaches from VLMs often involve multi-stage training or additional tuning, increasing computational overhead and optimization complexity. In this paper, we propose Dual-Head Optimization (DHO) -- a simple yet effective KD framework that transfers knowledge from VLMs to compact, task-specific models in semi-supervised settings. Specifically, we introduce dual prediction heads that independently learn from labeled data and teacher predictions, and propose to linearly combine their outputs during inference. We observe that DHO mitigates gradient conflicts between supervised and distillation signals, enabling more effective feature learning than single-head KD baselines. As a result, extensive experiments show that DHO consistently outperforms baselines across multiple domains and fine-grained datasets. Notably, on ImageNet, it achieves state-of-the-art performance, improving accuracy by 3% and 0.1% with 1% and 10% labeled data, respectively, while using fewer parameters.
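To make the abstract's description concrete, below is a minimal PyTorch-style sketch of the dual-head idea: a shared student backbone with one head trained on labeled data and a second head trained on the VLM teacher's soft predictions, with the two heads' outputs linearly combined at inference. All names (DHOStudent, alpha, tau, dho_loss) are illustrative assumptions, not the authors' implementation, and the loss weighting and temperature handling are simplified.

```python
# Hypothetical sketch of dual-head semi-supervised distillation (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DHOStudent(nn.Module):
    """Compact student: shared backbone with two prediction heads."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                          # shared feature extractor
        self.ce_head = nn.Linear(feat_dim, num_classes)   # learns from labeled data
        self.kd_head = nn.Linear(feat_dim, num_classes)   # learns from teacher predictions

    def forward(self, x):
        z = self.backbone(x)
        return self.ce_head(z), self.kd_head(z)


def dho_loss(student, x_labeled, y, x_unlabeled, teacher_probs, tau: float = 2.0):
    """Supervised cross-entropy on one head, distillation KL on the other."""
    ce_logits, _ = student(x_labeled)
    loss_ce = F.cross_entropy(ce_logits, y)

    _, kd_logits = student(x_unlabeled)
    loss_kd = F.kl_div(
        F.log_softmax(kd_logits / tau, dim=-1),
        teacher_probs,                                    # soft targets from the frozen VLM
        reduction="batchmean",
    ) * tau ** 2
    return loss_ce + loss_kd


@torch.no_grad()
def predict(student, x, alpha: float = 0.5):
    """Inference: linearly combine the two heads' softmax outputs."""
    ce_logits, kd_logits = student(x)
    return alpha * F.softmax(ce_logits, dim=-1) + (1 - alpha) * F.softmax(kd_logits, dim=-1)
```

Because each loss term updates its own head, the supervised and distillation gradients only meet in the shared backbone, which is the mechanism the abstract credits for reducing gradient conflict relative to single-head KD.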

