NanoKnow: How to Know What Your Language Model Knows
February 23, 2026
Authors: Lingwei Gu, Nour Jedidi, Jimmy Lin
cs.AI
Abstract
How do large language models (LLMs) know what they know? Answering this question has been difficult because pre-training data is often a "black box" -- unknown or inaccessible. The recent release of nanochat -- a family of small LLMs with fully open pre-training data -- addresses this challenge by providing a transparent view into where a model's parametric knowledge comes from. Towards the goal of understanding how knowledge is encoded by LLMs, we release NanoKnow, a benchmark dataset that partitions questions from Natural Questions and SQuAD into splits based on whether their answers are present in nanochat's pre-training corpus. Using these splits, we can now properly disentangle the sources of knowledge that LLMs rely on when producing an output. To demonstrate NanoKnow's utility, we conduct experiments using eight nanochat checkpoints. Our findings show: (1) closed-book accuracy is strongly influenced by answer frequency in the pre-training data, (2) providing external evidence can mitigate this frequency dependence, (3) even with external evidence, models are more accurate when answers were seen during pre-training, demonstrating that parametric and external knowledge are complementary, and (4) non-relevant information is harmful, with accuracy decreasing based on both the position and the number of non-relevant contexts. We release all NanoKnow artifacts at https://github.com/castorini/NanoKnow.
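The core construction step described above -- splitting QA pairs by whether their gold answers occur in the pre-training corpus -- can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the function name, the `{"question", "answers"}` record layout, and the case-insensitive substring matching criterion are all assumptions; the real NanoKnow release may use a different matching procedure (e.g., normalized or tokenized matching over sharded corpus files).

```python
def partition_by_answer_presence(qa_pairs, corpus_text):
    """Split QA pairs into (seen, unseen) based on whether any gold answer
    string appears in the pre-training corpus.

    Illustrative sketch only: assumes in-memory corpus text and simple
    case-insensitive substring matching.
    """
    corpus_lower = corpus_text.lower()
    seen, unseen = [], []
    for qa in qa_pairs:
        # A question counts as "seen" if any of its gold answers
        # occurs verbatim (case-insensitively) in the corpus.
        if any(ans.lower() in corpus_lower for ans in qa["answers"]):
            seen.append(qa)
        else:
            unseen.append(qa)
    return seen, unseen


if __name__ == "__main__":
    # Hypothetical toy data, purely for demonstration.
    corpus = "Paris is the capital of France. The Seine flows through it."
    qa_pairs = [
        {"question": "What is the capital of France?", "answers": ["Paris"]},
        {"question": "Who wrote Dune?", "answers": ["Frank Herbert"]},
    ]
    seen, unseen = partition_by_answer_presence(qa_pairs, corpus)
    print(len(seen), len(unseen))
```

With splits like these, closed-book and evidence-augmented accuracy can be compared across the "seen" and "unseen" subsets, which is the disentangling the abstract refers to.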