Spectral Condition for μP under Width-Depth Scaling
February 28, 2026
Authors: Chenyu Zheng, Rongzhen Wang, Xinyu Zhang, Chongxuan Li
cs.AI
Abstract
Generative foundation models are increasingly scaled in both width and depth, posing significant challenges for stable feature learning and reliable hyperparameter (HP) transfer across model sizes. While maximal update parameterization (μP) has provided a principled solution to both problems for width scaling, existing extensions to the joint width-depth scaling regime remain fragmented, architecture- and optimizer-specific, and often rely on technically involved theories. In this work, we develop a simple and unified spectral framework for μP under joint width-depth scaling. Considering residual networks of varying block depths, we first introduce a spectral μP condition that precisely characterizes how the norms of weights and their per-step updates should scale with width and depth, unifying previously disparate μP formulations as special cases. Building on this condition, we then derive a general recipe for implementing μP across a broad class of optimizers by mapping the spectral constraints to concrete HP parameterizations. This approach not only recovers existing μP formulations (e.g., for SGD and AdamW) but also naturally extends to a wider range of optimizers. Finally, experiments on GPT-2 style language models demonstrate that the proposed spectral μP condition preserves stable feature learning and enables robust HP transfer under width-depth scaling.
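The abstract does not state the paper's concrete scaling rule, but the flavor of mapping a spectral condition to per-layer optimizer hyperparameters can be sketched. Below is a minimal, hypothetical sketch assuming the width-only spectral condition of Yang, Simon and Bernstein (2023), i.e. per-step update spectral norm Θ(√(fan_out/fan_in)), together with an illustrative 1/depth damping of residual-branch updates; the function name spectral_mup_lr and the depth rule are assumptions for illustration, not the paper's recipe.

```python
# A minimal sketch (not the paper's released code) of turning a spectral
# muP condition into per-layer AdamW learning rates. Assumed, not taken
# from the abstract: the width-only spectral condition of Yang, Simon &
# Bernstein (2023) and a 1/depth damping of residual-branch updates.

import math

def spectral_mup_lr(fan_in: int, fan_out: int, depth: int = 1,
                    base_lr: float = 1e-3) -> float:
    """Per-layer AdamW learning rate implied by a spectral muP condition."""
    # Target per-step update spectral norm: sqrt(fan_out / fan_in), damped
    # by 1/depth for residual-branch weights (an illustrative choice).
    target = math.sqrt(fan_out / fan_in) / depth
    # An Adam-style update has entries of magnitude ~lr, so a dense,
    # roughly isotropic update has spectral norm ~ lr * sqrt(fan_in * fan_out).
    return base_lr * target / math.sqrt(fan_in * fan_out)

# A square hidden layer of width n in a depth-L stack gets
# lr ~ base_lr / (n * L) under these assumptions.
print(spectral_mup_lr(fan_in=4096, fan_out=4096, depth=24))
```

At depth 1 with fan_in = fan_out = n, this reduces to the familiar Θ(1/n) AdamW learning-rate rule of width-only μP, which is the kind of special case the abstract says the unified spectral condition recovers.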