Physics of Language Models: Part 4.1, Architecture Design and the Magic of Canon Layers
December 19, 2025
Author: Zeyuan Allen-Zhu
cs.AI
Abstract
Understanding architectural differences in language models is challenging, especially at academic-scale pretraining (e.g., 1.3B parameters, 100B tokens), where results are often dominated by noise and randomness. To overcome this, we introduce controlled synthetic pretraining tasks that isolate and evaluate core model capabilities. Within this framework, we discover Canon layers: lightweight architectural components -- named after the musical term "canon" -- that promote horizontal information flow across neighboring tokens. Canon layers compute weighted sums of nearby token representations and integrate seamlessly into Transformers, linear attention, state-space models, or any sequence architecture.
We present 12 key results, including how Canon layers enhance reasoning depth (e.g., by 2×), reasoning breadth, knowledge manipulation, and more. They lift weak architectures like NoPE to match RoPE, and linear attention to rival SOTA linear models like Mamba2/GDN -- validated both through synthetic tasks and real-world academic-scale pretraining. This synthetic playground offers an economical, principled path to isolate core model capabilities often obscured at academic scales. Equipped with infinite high-quality data, it may even predict how future architectures will behave as training pipelines improve -- e.g., through better data curation or RL-based post-training -- unlocking deeper reasoning and hierarchical inference.
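
To make the mechanism concrete, below is a minimal PyTorch sketch of a Canon layer as described in the abstract: each token's representation is augmented with a learned weighted sum of itself and a few preceding tokens. The use of a causal depthwise convolution, the kernel size of 4, and the residual connection are illustrative assumptions of this sketch, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CanonLayer(nn.Module):
    """Sketch of a Canon layer: a residual, causal weighted sum over
    nearby token representations (an assumed depthwise-conv realization)."""

    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.kernel_size = kernel_size
        # Depthwise weights: one small causal filter per channel.
        self.weight = nn.Parameter(torch.randn(dim, 1, kernel_size) / kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)                      # (batch, dim, seq_len)
        # Left-pad so each position sees only itself and earlier tokens (causal).
        h = F.pad(h, (self.kernel_size - 1, 0))
        h = F.conv1d(h, self.weight, groups=h.shape[1])
        # Residual: add the mixed neighborhood back onto the original stream.
        return x + h.transpose(1, 2)

# Usage: shape-preserving, so it can be dropped in front of any sequence block.
layer = CanonLayer(dim=512)
x = torch.randn(2, 128, 512)
y = layer(x)  # (2, 128, 512)

With kernel_size=4, each position mixes in its three immediate predecessors, giving the otherwise vertical residual stream a cheap horizontal information path; because the layer preserves shapes, it can sit in front of attention, linear-attention, or SSM blocks alike.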