CausalLM is not optimal for in-context learning
August 14, 2023
Authors: Nan Ding, Tomer Levinboim, Jialin Wu, Sebastian Goodman, Radu Soricut
cs.AI
Abstract
Recent empirical evidence indicates that transformer-based in-context
learning performs better when using a prefix language model (prefixLM), in
which all in-context samples can attend to each other, than when using a causal
language model (causalLM), whose auto-regressive attention prohibits in-context
samples from attending to future samples. While this result is intuitive, it is
not understood from a theoretical perspective. In this paper we take a
theoretical approach and analyze the convergence behavior of prefixLM and
causalLM under a certain parameter construction. Our analysis shows that both
LM types converge to their stationary points at a linear rate, but that while
prefixLM converges to the optimal solution of linear regression, the convergence
dynamics of causalLM follow those of an online gradient descent algorithm, which
is not guaranteed to reach the optimum even as the number of samples grows
without bound. We supplement our theoretical claims with empirical experiments
on synthetic and real tasks, using various types of transformers. Our
experiments verify that causalLM consistently underperforms prefixLM in all
settings.
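
To make the contrast concrete, below is a minimal illustrative sketch (not code from the paper; all names and hyperparameters are assumptions) of the two attention patterns on in-context samples, together with a toy linear-regression analogue of the convergence claim: full-batch gradient descent stands in for prefixLM-style dynamics, while one-pass online gradient descent stands in for causalLM-style dynamics.

```python
# Illustrative sketch only: prefixLM vs. causalLM attention masks, and a toy
# linear-regression analogue of the convergence behaviors described above.
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 4                              # in-context samples, feature dimension
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

# prefixLM-style mask: every in-context sample attends to every other sample.
prefix_mask = np.ones((n, n), dtype=bool)
# causalLM-style mask: sample i attends only to samples j <= i (auto-regressive).
causal_mask = np.tril(np.ones((n, n), dtype=bool))

lr = 0.1

# prefixLM analogue: full-batch gradient descent on the least-squares objective,
# which converges (linearly) toward the optimal linear-regression solution.
w_prefix = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ w_prefix - y) / n
    w_prefix -= lr * grad

# causalLM analogue: online gradient descent, each sample seen once in order;
# its final iterate need not match the optimum even with many samples.
w_causal = np.zeros(d)
for x_i, y_i in zip(X, y):
    w_causal -= lr * (x_i @ w_causal - y_i) * x_i

w_opt = np.linalg.lstsq(X, y, rcond=None)[0]
print("||w_prefix - w_opt|| =", np.linalg.norm(w_prefix - w_opt))
print("||w_causal - w_opt|| =", np.linalg.norm(w_causal - w_opt))
```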