Adam's Law: Textual Frequency Law on Large Language Models
April 2, 2026
Authors: Hongyuan Adam Lu, Z. L., Victor Wei, Zefan Zhang, Zhao Hong, Qiqi Xiang, Bowen Cao, Wai Lam
cs.AI
Abstract
Although textual frequency is known to correlate with human cognitive processing, such as reading speed, its relationship to Large Language Models (LLMs) has seldom been studied. To the best of our knowledge, this paper is the first to propose a research direction centered on textual data frequency, an understudied topic. Our framework consists of three components. First, we propose the Textual Frequency Law (TFL), which states that high-frequency textual data should be preferred when both prompting and fine-tuning LLMs. Because the training data of many LLMs is not publicly available, we estimate sentence-level frequency from online resources and use an input paraphraser to rewrite inputs into more frequent textual expressions. Second, we propose Textual Frequency Distillation (TFD), which queries LLMs to extend the sentences in a dataset through story completion; the resulting corpora are then used to adjust the initial frequency estimates. Finally, we propose Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs on sentences ordered from low to high frequency. Experiments on our curated Textual Frequency Paired Dataset (TFPD) cover math reasoning, machine translation, commonsense reasoning, and agentic tool calling, and the results demonstrate the effectiveness of our framework.
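The CTFT schedule described above, which orders training sentences from low to high estimated frequency, can be sketched as follows. This is a minimal sketch, not the paper's implementation: the frequency estimator here is a toy average-token-count stand-in for the online-resource estimation the abstract describes, and the names `curriculum_order` and `avg_token_freq` are hypothetical.

```python
from collections import Counter

def curriculum_order(sentences, freq_estimate):
    # CTFT-style ordering: least frequent sentences first,
    # most frequent last, per the increasing-frequency schedule.
    return sorted(sentences, key=freq_estimate)

# Toy sentence-level frequency estimate: the average corpus count
# of a sentence's tokens. (A hypothetical stand-in for estimating
# frequency from online resources, as the paper proposes.)
corpus_tokens = Counter("the cat sat on the mat the dog ran".split())

def avg_token_freq(sentence):
    toks = sentence.split()
    return sum(corpus_tokens[t] for t in toks) / len(toks)

data = ["the cat sat", "zyx qwv", "the the mat"]
print(curriculum_order(data, avg_token_freq))
# ['zyx qwv', 'the cat sat', 'the the mat']
```

A real schedule would then feed these ordered sentences to the fine-tuning loop in stages, rather than shuffling them uniformly.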