

Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting

January 5, 2026
Authors: Muxi Diao, Lele Yang, Wuxuan Gong, Yutong Zhang, Zhonghao Yan, Yufei Han, Kongming Liang, Weiran Xu, Zhanyu Ma
cs.AI

Abstract

Supervised Fine-Tuning (SFT) is the standard paradigm for domain adaptation, yet it frequently incurs the cost of catastrophic forgetting. In sharp contrast, on-policy Reinforcement Learning (RL) effectively preserves general capabilities. We investigate this discrepancy and identify a fundamental distributional gap: while RL aligns with the model's internal belief, SFT forces the model to fit external supervision. This mismatch often manifests as "Confident Conflicts": tokens characterized by low probability but low entropy. In these instances, the model is highly confident in its own prediction but is forced to learn a divergent ground truth, triggering destructive gradient updates. To address this, we propose Entropy-Adaptive Fine-Tuning (EAFT). Unlike methods relying solely on prediction probability, EAFT utilizes token-level entropy as a gating mechanism to distinguish between epistemic uncertainty and knowledge conflict. This allows the model to learn from uncertain samples while suppressing gradients on conflicting data. Extensive experiments on the Qwen and GLM series (ranging from 4B to 32B parameters) across mathematical, medical, and agentic domains confirm our hypothesis: EAFT consistently matches the downstream performance of standard SFT while significantly mitigating the degradation of general capabilities.
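
To make the gating idea concrete, below is a minimal PyTorch sketch of an entropy-gated SFT loss based only on the description in the abstract. The function name `entropy_gated_sft_loss`, the hard mask, and the thresholds `prob_threshold` and `entropy_threshold` are hypothetical illustration choices, not the paper's actual EAFT formulation, which may use a different (e.g. soft or continuous) weighting.

```python
import torch
import torch.nn.functional as F

def entropy_gated_sft_loss(logits, labels,
                           prob_threshold=0.1,    # hypothetical cutoff for "low probability" on the label
                           entropy_threshold=0.5,  # hypothetical cutoff for "low entropy" (nats)
                           ignore_index=-100):
    """Sketch of an entropy-gated SFT loss.

    Tokens where the model assigns low probability to the ground-truth label
    AND has low predictive entropy (i.e. it is confident in a different token)
    are treated as "confident conflicts" and excluded from the loss, so their
    gradients are suppressed. Uncertain tokens (high entropy) still contribute.
    """
    vocab = logits.size(-1)
    flat_logits = logits.view(-1, vocab)
    flat_labels = labels.view(-1)

    # Per-token cross-entropy, no reduction yet.
    ce = F.cross_entropy(flat_logits, flat_labels,
                         ignore_index=ignore_index, reduction="none")

    with torch.no_grad():
        log_probs = F.log_softmax(flat_logits, dim=-1)
        probs = log_probs.exp()
        # Predictive entropy of the model's distribution at each position.
        entropy = -(probs * log_probs).sum(dim=-1)
        # Probability the model assigns to the ground-truth token.
        safe_labels = flat_labels.clamp_min(0)  # avoid indexing with ignore_index
        label_prob = probs.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)

        valid = flat_labels != ignore_index
        # "Confident conflict": low probability on the label, low entropy overall.
        conflict = (label_prob < prob_threshold) & (entropy < entropy_threshold)
        gate = (valid & ~conflict).float()

    # Average cross-entropy over the tokens that pass the gate.
    return (ce * gate).sum() / gate.sum().clamp_min(1.0)
```

In this sketch the gate distinguishes the two regimes the abstract describes: a low label probability with high entropy signals epistemic uncertainty (the token is kept and learned from), while a low label probability with low entropy signals a knowledge conflict (the token is masked out).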