
Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models

February 12, 2025
Authors: Sonam Gupta, Yatin Nandwani, Asaf Yehudai, Dinesh Khandelwal, Dinesh Raghu, Sachindra Joshi
cs.AI

Abstract

Fine-tuning Large Language Models (LLMs) on specific datasets is a common practice to improve performance on target tasks. However, this performance gain often comes with overfitting, where the model becomes too specialized to either the task or the characteristics of the training data, resulting in a loss of generalization. This paper introduces Selective Self-to-Supervised Fine-Tuning (S3FT), a fine-tuning approach that achieves better performance than standard supervised fine-tuning (SFT) while improving generalization. S3FT leverages the existence of multiple valid responses to a query. By utilizing the model's own correct responses, S3FT reduces model specialization during the fine-tuning stage. S3FT first identifies the correct model responses from the training set by deploying an appropriate judge. Then, it fine-tunes the model using those correct model responses, together with the gold responses (or their paraphrases) for the remaining samples. The effectiveness of S3FT is demonstrated through experiments on mathematical reasoning, Python programming, and reading comprehension tasks. The results show that standard SFT can lead to an average performance drop of up to 4.4 points on benchmarks such as MMLU and TruthfulQA. In contrast, S3FT cuts this drop nearly in half, to 2.5 points, indicating better generalization than SFT while performing significantly better on the fine-tuning tasks.
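
To make the procedure concrete, below is a minimal Python sketch of the data-construction step the abstract describes. The interfaces are assumptions for illustration: model.generate, judge.is_correct, and model.paraphrase are hypothetical helpers, not the authors' implementation.

    # Hypothetical sketch of the S3FT data-construction loop.
    # model.generate, judge.is_correct, and model.paraphrase are
    # placeholder interfaces, not the authors' code.

    def build_s3ft_dataset(model, judge, train_set):
        """Return (query, target) pairs for standard SFT, preferring the
        model's own correct responses over the gold ones."""
        s3ft_data = []
        for query, gold in train_set:
            prediction = model.generate(query)  # model's own response
            if judge.is_correct(query, prediction, gold):
                # Correct self-generated response: training on it keeps the
                # model close to its own output distribution, limiting
                # over-specialization.
                target = prediction
            else:
                # Incorrect response: fall back to the gold answer,
                # rephrased by the model itself so the target stays closer
                # to the model's distribution (or use gold verbatim).
                target = model.paraphrase(gold)
            s3ft_data.append((query, target))
        return s3ft_data

The resulting pairs are then used for ordinary supervised fine-tuning; this selection step is what distinguishes S3FT from standard SFT.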

