In-Context Pretraining: Language Modeling Beyond Document Boundaries
October 16, 2023
Authors: Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis
cs.AI
Abstract
Large language models (LMs) are currently trained to predict tokens given
document prefixes, enabling them to directly perform long-form generation and
prompting-style tasks which can be reduced to document completion. Existing
pretraining pipelines train LMs by concatenating random sets of short documents
to create input contexts but the prior documents provide no signal for
predicting the next document. We instead present In-Context Pretraining, a new
approach where language models are pretrained on a sequence of related
documents, thereby explicitly encouraging them to read and reason across
document boundaries. We can do In-Context Pretraining by simply changing the
document ordering so that each context contains related documents, and directly
applying existing pretraining pipelines. However, this document sorting problem
is challenging. There are billions of documents and we would like the sort to
maximize contextual similarity for every document without repeating any data.
To do this, we introduce approximate algorithms for finding related documents
with efficient nearest neighbor search and constructing coherent input contexts
with a graph traversal algorithm. Our experiments show In-Context Pretraining
offers a simple and scalable approach to significantly enhance LMs' performance:
we see notable improvements in tasks that require more complex contextual
reasoning, including in-context learning (+8%), reading comprehension (+15%),
faithfulness to previous contexts (+16%), long-context reasoning (+5%), and
retrieval augmentation (+9%).
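
The sketch below illustrates the document-ordering idea described in the abstract: embed documents, link each one to its most similar neighbors, then greedily traverse that graph so consecutive documents in the training stream are related and no document is repeated. It is a minimal illustration, not the authors' implementation; the function names (`build_knn_graph`, `order_documents`) and the brute-force similarity search are assumptions standing in for an efficient approximate nearest-neighbor index over billions of documents.

```python
# Minimal sketch of "sort documents so related ones share a context".
# Not the paper's code: names and the brute-force k-NN are illustrative only.
import numpy as np


def build_knn_graph(embeddings: np.ndarray, k: int) -> list[list[int]]:
    """For each document, return indices of its k most similar documents
    (brute-force cosine similarity; a stand-in for approximate NN search)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a document is not its own neighbor
    return [list(np.argsort(-sims[i])[:k]) for i in range(len(embeddings))]


def order_documents(embeddings: np.ndarray, k: int = 10) -> list[int]:
    """Greedy graph traversal: repeatedly hop to the most similar unvisited
    neighbor, so adjacent documents in the ordering are related and every
    document appears exactly once."""
    neighbors = build_knn_graph(embeddings, k)
    n = len(embeddings)
    visited, order, current = set(), [], 0
    while len(order) < n:
        order.append(current)
        visited.add(current)
        # Prefer an unvisited nearest neighbor of the current document.
        next_doc = next((j for j in neighbors[current] if j not in visited), None)
        if next_doc is None:
            # Otherwise jump to any remaining document and start a new chain.
            next_doc = next((j for j in range(n) if j not in visited), None)
            if next_doc is None:
                break
        current = next_doc
    return order


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    doc_embeddings = rng.normal(size=(100, 32))  # placeholder embeddings
    ordering = order_documents(doc_embeddings, k=5)
    # Pack the ordered documents into fixed-size input contexts.
    contexts = [ordering[i:i + 8] for i in range(0, len(ordering), 8)]
    print(contexts[0])
```

Under this assumption, the existing pretraining pipeline is unchanged: only the order in which documents are concatenated into input contexts differs.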