SPINAL -- Scaling-law and Preference Integration in Neural Alignment Layers
January 8, 2026
Authors: Arion Das, Partha Pratim Saha, Amit Dhanda, Vinija Jain, Aman Chadha, Amitava Das
cs.AI
Abstract
Direct Preference Optimization (DPO) is a principled, scalable alternative to RLHF for aligning large language models from pairwise preferences, but its internal geometric footprint remains undercharacterized, limiting audits, checkpoint comparisons, and failure prediction. We introduce SPINAL (Scaling-law and Preference Integration in Neural Alignment Layers), a diagnostic that measures how alignment reshapes representations across depth by tracing localized structural change layer by layer. Across model families, DPO produces a layerwise calibration effect concentrated in the final decoder blocks (often layers 21-30), where preference gradients most directly affect the next-token distribution. SPINAL encodes each checkpoint as a depth trace over (layer index, contraction score, transport score). The contraction score summarizes how quickly the tail of a layer's spectrum decays (how fast small modes vanish); higher values indicate stronger contraction into fewer effective directions. The transport score summarizes how much the token distribution shifts between adjacent layers using a bounded overlap measure; lower values indicate shorter, smoother steps through representation space. Aligned checkpoints show a late-layer ramp-up in contraction and a smooth reduction in transport, consistent with tightened and stabilized policy mass, while unaligned models trace higher-curvature, more entropic, and geometrically incoherent depth paths. Overall, alignment is geometrically localized: the final layers encode the dominant preference-induced corrections. SPINAL turns this localization into a practical audit signal, quantifying where alignment concentrates, how strongly it manifests, and when it begins to destabilize during training.
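The abstract names two per-layer quantities (a spectral-tail contraction score and a bounded-overlap transport score) without giving their formulas. Below is a minimal sketch of how such a depth trace could be computed, assuming the contraction score is an exponential decay rate fit to the tail of a layer's singular-value spectrum and the transport score is one minus a Bhattacharyya overlap between adjacent layers' next-token distributions (e.g., from a logit-lens readout). The function names, the tail fraction, and the choice of overlap measure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contraction_score(hidden_states: np.ndarray, tail_frac: float = 0.5) -> float:
    """Decay rate of the tail of a layer's singular-value spectrum.

    hidden_states: (tokens, dims) activations for one layer.
    Faster tail decay -> mass concentrated in fewer effective
    directions -> stronger contraction (higher score).
    Assumed definition: slope of the log-spectrum over its tail.
    """
    s = np.linalg.svd(hidden_states - hidden_states.mean(0), compute_uv=False)
    s = s / s.sum()                              # normalize the spectrum
    tail = s[int(len(s) * (1 - tail_frac)):]     # keep the smallest modes
    tail = tail[tail > 1e-12]                    # drop numerically dead modes
    if len(tail) < 2:
        return 0.0
    k = np.arange(len(tail))
    # Steeper negative slope of log-spectrum = faster vanishing small modes.
    slope = np.polyfit(k, np.log(tail), 1)[0]
    return float(-slope)

def transport_score(p: np.ndarray, q: np.ndarray) -> float:
    """Bounded shift between adjacent layers' next-token distributions.

    p, q: nonnegative token-probability vectors for layers l and l+1
    (e.g., from a logit-lens readout; an assumption here).
    Uses 1 - Bhattacharyya coefficient, which lies in [0, 1];
    lower values = shorter, smoother steps through representation space.
    """
    p = p / p.sum()
    q = q / q.sum()
    return float(1.0 - np.sum(np.sqrt(p * q)))

# Hypothetical depth trace for one checkpoint, with stand-in data:
rng = np.random.default_rng(0)
acts = [rng.standard_normal((128, 64)) for _ in range(4)]   # fake layer activations
dists = [rng.dirichlet(np.ones(1000)) for _ in range(4)]    # fake token distributions
trace = [(i, contraction_score(acts[i]), transport_score(dists[i], dists[i + 1]))
         for i in range(len(acts) - 1)]
```

Applied across all decoder blocks of a real checkpoint, the resulting (layer index, contraction score, transport score) triples form the depth trace the abstract describes; on an aligned model one would expect contraction to ramp up and transport to fall smoothly in the final layers.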