Context-Aware Meta-Learning

October 17, 2023
Authors: Christopher Fifty, Dennis Duan, Ronald G. Junkins, Ehsan Amid, Jure Leskovec, Christopher Ré, Sebastian Thrun
cs.AI

Abstract

Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning. However, visual models trained to detect new objects during inference have been unable to replicate this ability, and instead either perform poorly or require meta-training and/or fine-tuning on similar objects. In this work, we propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning. Our approach leverages a frozen pre-trained feature extractor, and analogous to in-context learning, recasts meta-learning as sequence modeling over datapoints with known labels and a test datapoint with an unknown label. On 8 out of 11 meta-learning benchmarks, our approach -- without meta-training or fine-tuning -- exceeds or matches the state-of-the-art algorithm, P>M>F, which is meta-trained on these benchmarks.
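The abstract's core recipe, a frozen pre-trained feature extractor feeding a sequence model over labeled support points plus one unlabeled query, can be sketched roughly as follows. This is a minimal illustration under assumptions of my own, not the paper's actual architecture: the class name, the Transformer configuration, and the label-embedding scheme are all hypothetical.

```python
# Sketch: few-shot classification recast as sequence modeling over
# (image embedding, label) pairs plus an unlabeled query, in the spirit
# of in-context learning. Illustrative only; not the paper's code.
import torch
import torch.nn as nn

class SequenceFewShotClassifier(nn.Module):
    def __init__(self, frozen_encoder, embed_dim, num_classes, depth=4, heads=8):
        super().__init__()
        self.encoder = frozen_encoder  # e.g. a pre-trained ViT; kept frozen
        for p in self.encoder.parameters():
            p.requires_grad = False
        # One learnable embedding per episode class, plus an "unknown"
        # token marking the query whose label must be predicted.
        self.label_embed = nn.Embedding(num_classes + 1, embed_dim)
        self.unknown_id = num_classes
        layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.seq_model = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, support_imgs, support_labels, query_img):
        # Embed all images with the frozen feature extractor.
        with torch.no_grad():
            s = self.encoder(support_imgs)            # (N, embed_dim)
            q = self.encoder(query_img.unsqueeze(0))  # (1, embed_dim)
        # Attach label information: known labels for support points,
        # the "unknown" token for the query.
        s = s + self.label_embed(support_labels)
        unk = torch.tensor([self.unknown_id], device=q.device)
        q = q + self.label_embed(unk)
        # Model the (N + 1)-element sequence; read out the query position.
        seq = torch.cat([s, q], dim=0).unsqueeze(0)   # (1, N + 1, embed_dim)
        out = self.seq_model(seq)
        return self.head(out[0, -1])                  # logits for the query
```

In a setup like this, only the sequence model, label embeddings, and output head carry trainable parameters; at inference, presenting a new episode of labeled examples and a query requires no fine-tuning, mirroring how a language model learns new concepts in context.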