
Fine-Tuning on Noisy Instructions: Effects on Generalization and Performance

October 3, 2025
作者: Ahmed Alajrami, Xingwei Tan, Nikolaos Aletras
cs.AI

Abstract

Instruction-tuning plays a vital role in enhancing the task-solving abilities of large language models (LLMs), improving their usability in generating helpful responses on various tasks. However, previous work has demonstrated that they are sensitive to minor variations in instruction phrasing. In this paper, we explore whether introducing perturbations in instruction-tuning data can enhance LLMs' resistance to noisy instructions. We focus on how instruction-tuning with perturbations, such as removing stop words or shuffling words, affects LLMs' performance on the original and perturbed versions of widely used benchmarks (MMLU, BBH, GSM8K). We further assess learning dynamics and potential shifts in model behavior. Surprisingly, our results suggest that instruction-tuning on perturbed instructions can, in some cases, improve downstream performance. These findings highlight the importance of including perturbed instructions in instruction-tuning, which can make LLMs more resilient to noisy user inputs.
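
To make the two perturbation types named in the abstract concrete, here is a minimal Python sketch of stop-word removal and word shuffling applied to an instruction string. The stop-word list and simple whitespace tokenization are illustrative assumptions; the paper's exact preprocessing is not specified here.

```python
import random

# Illustrative stop-word set; the paper's actual list is not given here.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "on", "and"}

def remove_stop_words(instruction: str) -> str:
    """Drop common stop words from an instruction (whitespace tokenization)."""
    return " ".join(w for w in instruction.split() if w.lower() not in STOP_WORDS)

def shuffle_words(instruction: str, seed: int = 0) -> str:
    """Randomly reorder the words of an instruction, seeded for reproducibility."""
    words = instruction.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

if __name__ == "__main__":
    inst = "Answer the following question in one sentence."
    print(remove_stop_words(inst))   # -> "Answer following question one sentence."
    print(shuffle_words(inst))       # -> same words in a seeded random order
```

Perturbed instructions like these would be mixed into the instruction-tuning data; the models are then evaluated on both the original and perturbed benchmark versions.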