

Memory Retrieval and Consolidation in Large Language Models through Function Tokens

October 9, 2025
Authors: Shaohua Zhang, Yuan Lin, Hang Li
cs.AI

Abstract

The remarkable success of large language models (LLMs) stems from their ability to consolidate vast amounts of knowledge into memory during pre-training and to retrieve it from memory during inference, enabling advanced capabilities such as knowledge memorization, instruction following, and reasoning. However, the mechanisms of memory retrieval and consolidation in LLMs remain poorly understood. In this paper, we propose the function token hypothesis to explain the workings of LLMs: during inference, function tokens activate the most predictive features from context and govern next-token prediction (memory retrieval); during pre-training, predicting the next tokens (usually content tokens) that follow function tokens increases the number of features the LLM learns and updates the model parameters (memory consolidation). Function tokens here roughly correspond to function words in linguistics, including punctuation marks, articles, prepositions, and conjunctions, in contrast to content tokens. We provide extensive experimental evidence supporting this hypothesis. Using bipartite graph analysis, we show that a small number of function tokens activate the majority of features. Case studies further reveal how function tokens activate the most predictive features from context to direct next-token prediction. We also find that during pre-training, the training loss is dominated by predicting the content tokens that follow function tokens, which forces function tokens to select the most predictive features from context.
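The bipartite graph analysis can be pictured as a token-to-feature graph: tokens on one side, learned features (e.g., sparse-autoencoder features) on the other, with an edge whenever a token activates a feature. Below is a minimal sketch of the coverage computation on that graph, not the paper's code; the toy activation sets and the degree-ranking heuristic are illustrative assumptions.

```python
# Sketch (not the paper's pipeline): given token -> activated-feature sets,
# rank tokens by bipartite degree and ask how few tokens cover most features.
def feature_coverage(activations: dict[str, set[int]]):
    """activations: token -> set of feature ids the token activates."""
    all_features = set().union(*activations.values())
    # Sort tokens by degree (number of features they touch), highest first.
    ranked = sorted(activations, key=lambda t: len(activations[t]), reverse=True)
    covered: set[int] = set()
    for k, token in enumerate(ranked, start=1):
        covered |= activations[token]
        yield k, token, len(covered) / len(all_features)

# Toy data mimicking the claimed pattern: function-like tokens touch many
# features, content tokens few. Real data would come from SAE activations.
toy = {",": set(range(0, 60)), "the": set(range(30, 90)), "of": set(range(50, 100)),
       "Paris": {3, 41}, "capital": {7, 55, 88}, "France": {12}}
for k, token, frac in feature_coverage(toy):
    print(f"top-{k} tokens (last added {token!r}) cover {frac:.0%} of features")
```

On the toy data, the three function-like tokens already cover 100% of the features, the shape of curve the abstract reports for real models.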
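The loss-dominance claim can likewise be probed with a few lines: bucket each next-token loss by whether the preceding token is a function token. This is a minimal sketch assuming an off-the-shelf causal LM (gpt2 here) and a hand-picked approximation of the function-token set; both are illustrative assumptions, not the paper's setup.

```python
# Sketch: compare mean next-token loss after function vs. content tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # any causal LM works; gpt2 keeps the sketch small
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

# Rough stand-in for the punctuation/article/preposition/conjunction classes.
FUNCTION_WORDS = {",", ".", ";", ":", "the", "a", "an", "of", "in", "on",
                  "at", "to", "and", "or", "but", "that", "as", "for", "with"}

def is_function_token(token_id: int) -> bool:
    # Strip the leading-space marker GPT-2's BPE uses, then compare.
    return tok.decode([token_id]).strip().lower() in FUNCTION_WORDS

@torch.no_grad()
def loss_by_predecessor(text: str):
    ids = tok(text, return_tensors="pt").input_ids
    logits = model(ids).logits
    # Per-position cross-entropy of predicting token t+1 from prefix ..t.
    losses = torch.nn.functional.cross_entropy(
        logits[0, :-1], ids[0, 1:], reduction="none")
    after_fn, after_content = [], []
    for pos, loss in enumerate(losses):
        # losses[pos] is the loss for predicting ids[pos+1]; bucket it by
        # the class of its predecessor token ids[pos].
        bucket = after_fn if is_function_token(ids[0, pos].item()) else after_content
        bucket.append(loss.item())
    return sum(after_fn) / len(after_fn), sum(after_content) / len(after_content)

# Sample text contains both token classes, so neither bucket is empty.
fn_loss, content_loss = loss_by_predecessor(
    "The capital of France is Paris, and the capital of Italy is Rome.")
print(f"mean loss after function tokens: {fn_loss:.3f}")
print(f"mean loss after content tokens:  {content_loss:.3f}")
```

Under the hypothesis, the first number should dominate: content tokens that follow function tokens are the hard, high-loss predictions that drive learning.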