

From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples

April 11, 2024
作者: Robert Vacareanu, Vlad-Andrei Negru, Vasile Suciu, Mihai Surdeanu
cs.AI

Abstract
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc.) can perform linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance rivaling, or even outperforming, that of traditional supervised methods such as Random Forest, Bagging, or Gradient Boosting. For example, on the challenging Friedman #2 regression dataset, Claude 3 outperforms many supervised methods such as AdaBoost, SVM, Random Forest, KNN, or Gradient Boosting. We then investigate how the performance of large language models scales with the number of in-context exemplars. Borrowing the notion of regret from online learning, we empirically show that LLMs are capable of obtaining sub-linear regret.
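The setup described above can be sketched in a few lines: numeric training pairs are serialized into a plain-text prompt that an LLM would complete, and a traditional supervised method is fit on the same examples for comparison. This is a minimal illustration, not the authors' exact protocol; the `x0: … -> y:` prompt format and the choice of Random Forest as the baseline are assumptions for the sketch.

```python
# Sketch: framing regression as in-context learning on the Friedman #2 dataset,
# with a Random Forest baseline trained on the same examples.
# The prompt format below is illustrative, not the paper's exact template.
import numpy as np
from sklearn.datasets import make_friedman2
from sklearn.ensemble import RandomForestRegressor

# 100 in-context examples plus one held-out query point.
X, y = make_friedman2(n_samples=101, noise=0.0, random_state=0)
X_train, y_train = X[:100], y[:100]
x_test, y_test = X[100], y[100]

def to_prompt(X_train, y_train, x_test):
    """Serialize (input, output) pairs as text; the LLM completes the last line."""
    lines = []
    for x, target in zip(X_train, y_train):
        feats = ", ".join(f"x{i}: {v:.2f}" for i, v in enumerate(x))
        lines.append(f"{feats} -> y: {target:.2f}")
    feats = ", ".join(f"x{i}: {v:.2f}" for i, v in enumerate(x_test))
    lines.append(f"{feats} -> y:")  # left open for the model to complete
    return "\n".join(lines)

prompt = to_prompt(X_train, y_train, x_test)
# `prompt` would be sent to an LLM (e.g., GPT-4 or Claude 3) with no gradient
# updates; the completion is parsed as the model's numeric prediction.

# Supervised baseline fit on the same 100 examples, for comparison.
rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
baseline_pred = rf.predict(x_test.reshape(1, -1))[0]
```

Sub-linear regret, in this context, means that as the number of in-context exemplars grows, the LLM's cumulative prediction error grows slower than linearly, so its average per-example error relative to the best fixed predictor shrinks over time.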

