SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe
October 7, 2024
Authors: Yuxin Xiao, Shujian Zhang, Wenxuan Zhou, Marzyeh Ghassemi, Sanqiang Zhao
cs.AI
Abstract
To induce desired behaviors in large language models (LLMs) for
interaction-driven tasks, the instruction-tuning stage typically trains LLMs on
instruction-response pairs using the next-token prediction (NTP) loss. Previous
work aiming to improve instruction-tuning performance often emphasizes the need
for higher-quality supervised fine-tuning (SFT) datasets, which typically
involves expensive data filtering with proprietary LLMs or labor-intensive data
generation by human annotators. However, these approaches do not fully leverage
the datasets' intrinsic properties, resulting in high computational and labor
costs, thereby limiting scalability and performance gains. In this paper, we
propose SFTMix, a novel recipe that elevates instruction-tuning performance
beyond the conventional NTP paradigm, without the need for well-curated
datasets. Observing that LLMs exhibit uneven confidence across the semantic
representation space, we argue that examples with different confidence levels
should play distinct roles during the instruction-tuning process. Based on this
insight, SFTMix leverages training dynamics to identify examples with varying
confidence levels, then applies a Mixup-based regularization to mitigate
overfitting on confident examples while propagating supervision signals to
improve learning on relatively unconfident ones. This approach enables SFTMix
to significantly outperform NTP across a wide range of instruction-following
and healthcare domain-specific SFT tasks, demonstrating its adaptability to
diverse LLM families and scalability to datasets of any size. Comprehensive
ablation studies further verify the robustness of SFTMix's design choices,
underscoring its versatility in consistently enhancing performance across
different LLMs and datasets in broader natural language processing
applications.
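To make the method description in the abstract concrete, below is a minimal, illustrative sketch of the Mixup-based regularization step, not the authors' released implementation. It assumes a Hugging Face-style causal LM (exposing `get_input_embeddings` and an `inputs_embeds` forward argument), pre-paired batches of confident and unconfident examples padded to a common length, and a hypothetical `alpha` hyperparameter for the Beta distribution; the training-dynamics step that identifies the two subsets is taken as given.

```python
import torch
import torch.nn.functional as F

def mixup_sft_loss(model, confident_batch, unconfident_batch, alpha=0.3):
    """Sketch of a Mixup-style regularizer for instruction tuning.

    Interpolates the token embeddings of a confident example and an
    unconfident example with a Beta-sampled coefficient, then mixes
    their next-token-prediction losses with the same coefficient.
    """
    # Interpolation coefficient lambda ~ Beta(alpha, alpha), as in standard Mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Token embeddings for the paired examples (sequences are assumed to be
    # padded to a common length so the shapes match).
    embed = model.get_input_embeddings()
    emb_c = embed(confident_batch["input_ids"])
    emb_u = embed(unconfident_batch["input_ids"])
    mixed_emb = lam * emb_c + (1.0 - lam) * emb_u

    # Forward pass on the interpolated embeddings.
    logits = model(inputs_embeds=mixed_emb).logits

    # Shift logits/labels for next-token prediction and mix the two losses.
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    loss_c = F.cross_entropy(
        shift_logits,
        confident_batch["labels"][:, 1:].reshape(-1),
        ignore_index=-100,
    )
    loss_u = F.cross_entropy(
        shift_logits,
        unconfident_batch["labels"][:, 1:].reshape(-1),
        ignore_index=-100,
    )
    return lam * loss_c + (1.0 - lam) * loss_u
```

In the paper's framing, mixing a confident example toward an unconfident one discourages overfitting on the former while propagating its supervision signal to the latter; only the interpolation and loss mixing are shown here.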