Behind RoPE: How Does Causal Mask Encode Positional Information?
September 25, 2025
Authors: Junu Kim, Xiao Liu, Zhenghao Lin, Lei Ji, Yeyun Gong, Edward Choi
cs.AI
Abstract
While explicit positional encodings such as RoPE are a primary source of
positional information in Transformer decoders, the causal mask also provides
positional information. In this work, we prove that the causal mask can induce
position-dependent patterns in attention scores, even without parameters or
causal dependency in the input. Our theoretical analysis indicates that the
induced attention pattern tends to favor nearby query-key pairs, mirroring the
behavior of common positional encodings. Empirical analysis confirms that
trained models exhibit the same behavior, with learned parameters further
amplifying these patterns. Notably, we find that the interaction of the causal
mask and RoPE distorts RoPE's relative attention score patterns into
non-relative ones. We observe this effect consistently across modern large
language models, suggesting that the causal mask should be considered a source
of positional information alongside explicit positional encodings.