

Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting

January 5, 2026
Authors: Muxi Diao, Lele Yang, Wuxuan Gong, Yutong Zhang, Zhonghao Yan, Yufei Han, Kongming Liang, Weiran Xu, Zhanyu Ma
cs.AI

Abstract

Supervised Fine-Tuning (SFT) is the standard paradigm for domain adaptation, yet it frequently incurs the cost of catastrophic forgetting. In sharp contrast, on-policy Reinforcement Learning (RL) effectively preserves general capabilities. We investigate this discrepancy and identify a fundamental distributional gap: while RL aligns with the model's internal belief, SFT forces the model to fit external supervision. This mismatch often manifests as "Confident Conflicts": tokens characterized by low probability but low entropy. In these instances, the model is highly confident in its own prediction but is forced to learn a divergent ground truth, triggering destructive gradient updates. To address this, we propose Entropy-Adaptive Fine-Tuning (EAFT). Unlike methods relying solely on prediction probability, EAFT uses token-level entropy as a gating mechanism to distinguish epistemic uncertainty from knowledge conflict. This allows the model to learn from uncertain samples while suppressing gradients on conflicting data. Extensive experiments on the Qwen and GLM series (ranging from 4B to 32B parameters) across mathematical, medical, and agentic domains confirm our hypothesis. EAFT consistently matches the downstream performance of standard SFT while significantly mitigating the degradation of general capabilities.
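
To make the gating idea concrete, below is a minimal sketch of one plausible instantiation of an entropy-gated SFT loss in PyTorch. The function name `entropy_gated_sft_loss`, the linear gate, and the threshold `tau` are illustrative assumptions, not the paper's exact formulation; the abstract only specifies that token-level entropy modulates the gradient so that low-entropy ("confident conflict") tokens are suppressed while high-entropy (uncertain) tokens are learned from.

```python
# Sketch only: one way to gate per-token cross-entropy by predictive entropy.
# `entropy_gated_sft_loss` and `tau` are hypothetical names, not from the paper.
import torch
import torch.nn.functional as F

def entropy_gated_sft_loss(logits, labels, tau=1.0, ignore_index=-100):
    """Down-weight tokens where the model is confident (low entropy) yet the
    external label diverges from its prediction ("confident conflicts")."""
    log_probs = F.log_softmax(logits, dim=-1)            # (B, T, V)
    probs = log_probs.exp()
    # Token-level predictive entropy H = -sum_v p_v * log p_v
    entropy = -(probs * log_probs).sum(dim=-1)           # (B, T)
    # Standard per-token cross-entropy against the supervision labels
    ce = F.cross_entropy(
        logits.transpose(1, 2), labels,
        ignore_index=ignore_index, reduction="none")     # (B, T)
    # Gate in [0, 1]: high entropy -> keep the gradient (genuine uncertainty),
    # low entropy -> suppress it (the model already holds a confident belief).
    gate = (entropy / tau).clamp(max=1.0).detach()
    mask = (labels != ignore_index).float()
    return (gate * ce * mask).sum() / mask.sum().clamp(min=1.0)
```

A probability-only filter would also down-weight tokens the model merely finds hard; gating on entropy instead keeps those uncertain tokens trainable and suppresses only the ones where a confident internal belief conflicts with the label.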