

Instruction-tuned Language Models are Better Knowledge Learners

February 20, 2024
Authors: Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Victoria Lin, Wen-tau Yih, Srinivasan Iyer
cs.AI

Abstract

In order for large language model (LLM)-based assistants to effectively adapt to evolving information needs, it must be possible to update their factual knowledge through continued training on new data. The standard recipe for doing so involves continued pre-training on new documents followed by instruction-tuning on question-answer (QA) pairs. However, we find that LLMs trained with this recipe struggle to answer questions, even though the perplexity of documents is minimized. We found that QA pairs are generally straightforward, while documents are more complex, weaving many factual statements together in an intricate manner. Therefore, we hypothesize that it is beneficial to expose LLMs to QA pairs before continued pre-training on documents so that the process of encoding knowledge from complex documents takes into account how this knowledge is accessed through questions. Based on this, we propose pre-instruction-tuning (PIT), a method that instruction-tunes on questions prior to training on documents. This contrasts with standard instruction-tuning, which learns how to extract knowledge after training on documents. Extensive experiments and ablation studies demonstrate that PIT significantly enhances the ability of LLMs to absorb knowledge from new documents, outperforming standard instruction-tuning by 17.8%.
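The core difference between the standard recipe and pre-instruction-tuning (PIT) is only the order of the two training stages. The sketch below illustrates that ordering with a small Hugging Face causal LM ("gpt2") standing in for the LLMs used in the paper; the helper names (`train_step`, `continued_pretrain`, `instruction_tune`), the prompt format, and the single-example loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two training orders, assuming a toy setup with "gpt2".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(text: str) -> float:
    # One next-token-prediction step on `text`.
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

def continued_pretrain(documents):
    # Continued pre-training: language modeling on raw documents.
    for doc in documents:
        train_step(doc)

def instruction_tune(qa_pairs):
    # Instruction-tuning: supervised training on question-answer pairs
    # (here simply formatted as prompt + answer text).
    for question, answer in qa_pairs:
        train_step(f"Question: {question}\nAnswer: {answer}")

documents = ["<new document weaving several factual statements together>"]
qa_pairs = [("<question about a fact in the document>", "<answer>")]

# Standard recipe: documents first, then QA instruction-tuning.
#   continued_pretrain(documents); instruction_tune(qa_pairs)

# Pre-instruction-tuning (PIT): QA pairs first, then continued pre-training,
# so the encoding of document knowledge is shaped by how it is later queried.
instruction_tune(qa_pairs)
continued_pretrain(documents)
```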
