
Memory Retrieval and Consolidation in Large Language Models through Function Tokens

October 9, 2025
Authors: Shaohua Zhang, Yuan Lin, Hang Li
cs.AI

Abstract

The remarkable success of large language models (LLMs) stems from their ability to consolidate vast amounts of knowledge into memory during pre-training and to retrieve it from memory during inference, enabling advanced capabilities such as knowledge memorization, instruction following, and reasoning. However, the mechanisms of memory retrieval and consolidation in LLMs remain poorly understood. In this paper, we propose the function token hypothesis to explain how LLMs work: during inference, function tokens activate the most predictive features from the context and govern next-token prediction (memory retrieval); during pre-training, predicting the next tokens (usually content tokens) that follow function tokens increases the number of features learned by the LLM and updates the model parameters (memory consolidation). Function tokens here roughly correspond to function words in linguistics, including punctuation marks, articles, prepositions, and conjunctions, in contrast to content tokens. We provide extensive experimental evidence supporting this hypothesis. Using bipartite graph analysis, we show that a small number of function tokens activate the majority of features. Case studies further reveal how function tokens activate the most predictive features from the context to direct next-token prediction. We also find that during pre-training, the training loss is dominated by predicting the content tokens that follow function tokens, which forces function tokens to select the most predictive features from the context.
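The bipartite graph analysis mentioned above relates tokens on one side to the features they activate on the other, then asks what fraction of all features a small set of function tokens accounts for. A minimal sketch of that counting step, using made-up token-to-feature data (the paper's actual analysis operates on learned features of an LLM, not this toy dictionary; all token sets and feature IDs below are illustrative assumptions):

```python
# Toy bipartite structure: each token maps to the set of feature IDs it
# activates. Feature IDs and token choices here are invented for illustration.
activations = {
    "the": {1, 2, 3, 4, 5, 6},
    "of":  {2, 3, 7, 8, 9},
    ",":   {1, 4, 10, 11, 12, 13},
    ".":   {5, 6, 14, 15, 16},
    "and": {3, 9, 17, 18},
    "cat": {19, 20},
    "sat": {21},
    "mat": {22, 23},
    "atom": {24},
}

# Hypothesized function tokens: punctuation, articles, prepositions, conjunctions.
FUNCTION_TOKENS = {".", ",", "the", "of", "and"}

def activation_share(acts, tokens):
    """Fraction of all distinct features activated by the given token subset."""
    all_features = set().union(*acts.values())
    subset_features = set().union(*(acts[t] for t in tokens if t in acts))
    return len(subset_features) / len(all_features)

share = activation_share(activations, FUNCTION_TOKENS)
print(f"function tokens activate {share:.0%} of all features")
```

In this toy graph the five function tokens cover 18 of the 24 distinct features (75%), mirroring the qualitative claim that a small number of function tokens activate the majority of features.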