In-Context Learning Creates Task Vectors

October 24, 2023
Authors: Roee Hendel, Mor Geva, Amir Globerson
cs.AI

Abstract

In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm. However, its underlying mechanism is still not well understood. In particular, it is challenging to map it to the "standard" machine learning framework, where one uses a training set S to find a best-fitting function f(x) in some hypothesis class. Here we make progress on this problem by showing that the functions learned by ICL often have a very simple structure: they correspond to the transformer LLM whose only inputs are the query x and a single "task vector" calculated from the training set. Thus, ICL can be seen as compressing S into a single task vector theta(S) and then using this task vector to modulate the transformer to produce the output. We support the above claim via comprehensive experiments across a range of models and tasks.
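
To make the claimed mechanism concrete, here is a minimal sketch (not the authors' released code) of the two-step view from the abstract, using a Hugging Face model and a PyTorch forward hook: first compute theta(S) as the hidden state of the demonstration prompt's last token at an intermediate layer, then run the model on the query alone while patching that vector into the same layer. The model name, the layer index, and the toy fruit-to-color task are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of the task-vector mechanism described above, not the
# authors' code. Assumptions: GPT-2 as the LLM, layer index 6, and a toy
# fruit -> color task; the paper itself sweeps models, layers, and tasks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumed stand-in for the LLMs evaluated in the paper
LAYER = 6             # assumed intermediate layer at which theta(S) is read/written

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

# Step 1: run the demonstrations S (ending at a dummy query position) and read
# the hidden state of the last token at an intermediate layer -- this is theta(S).
demo_prompt = "apple -> red\nbanana -> yellow\nlime ->"
with torch.no_grad():
    out = model(**tok(demo_prompt, return_tensors="pt"), output_hidden_states=True)
# hidden_states[0] is the embedding output, so index LAYER + 1 is block LAYER's output
task_vector = out.hidden_states[LAYER + 1][:, -1, :].clone()

# Step 2: run the bare query x with no demonstrations, but patch theta(S) into
# the residual stream at the same layer and position, so only the task vector
# (not the demonstration text) modulates the upper layers.
def patch_last_token(module, inputs, output):
    hidden = output[0].clone()
    hidden[:, -1, :] = task_vector          # overwrite the query's last-token state
    return (hidden,) + output[1:]

query_prompt = "cherry ->"
handle = model.transformer.h[LAYER].register_forward_hook(patch_last_token)
with torch.no_grad():
    logits = model(**tok(query_prompt, return_tensors="pt")).logits
handle.remove()

# If the mechanism holds, the top prediction should reflect the fruit -> color
# task (e.g. " red"), even though the query prompt contains no demonstrations.
print(tok.decode([logits[0, -1].argmax().item()]))
```

Patching via a forward hook keeps the demonstrations out of the query's context entirely, which is what allows a correct prediction to be attributed to the single vector theta(S) rather than to the full in-context prompt.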