ChatPaper.ai

Efficient Model Selection for Time Series Forecasting via LLMs

April 2, 2025
Authors: Wang Wei, Tiankai Yang, Hongjie Chen, Ryan A. Rossi, Yue Zhao, Franck Dernoncourt, Hoda Eldardiry
cs.AI

Abstract
Model selection is a critical step in time series forecasting, traditionally requiring extensive performance evaluations across various datasets. Meta-learning approaches aim to automate this process, but they typically depend on pre-constructed performance matrices, which are costly to build. In this work, we propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs. Through extensive experiments with LLaMA, GPT and Gemini, we demonstrate that our approach outperforms traditional meta-learning techniques and heuristic baselines, while significantly reducing computational overhead. These findings underscore the potential of LLMs in efficient model selection for time series forecasting.
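The abstract's core idea — asking an LLM to pick a forecasting model from dataset characteristics instead of building a performance matrix — can be sketched as a prompt-and-parse loop. This is a minimal illustration only: the candidate pool, prompt wording, and helper names are assumptions, not the paper's actual protocol, and the LLM call is left abstract.

```python
# Hypothetical sketch of LLM-based model selection for time series forecasting.
# The candidate list, prompt format, and function names are illustrative
# assumptions; the paper's exact setup may differ.

CANDIDATES = ["ARIMA", "ETS", "Prophet", "DeepAR", "N-BEATS"]

def build_prompt(metadata: dict) -> str:
    """Describe the dataset in natural language and ask for one model choice."""
    lines = [f"- {k}: {v}" for k, v in metadata.items()]
    return (
        "You are selecting a forecasting model for a time series dataset.\n"
        "Dataset characteristics:\n" + "\n".join(lines) + "\n"
        f"Candidates: {', '.join(CANDIDATES)}.\n"
        "Answer with exactly one candidate name."
    )

def parse_choice(reply: str) -> str:
    """Map the LLM's free-text reply onto a known candidate (first match wins)."""
    for name in CANDIDATES:
        if name.lower() in reply.lower():
            return name
    return CANDIDATES[0]  # fall back to a default model if nothing matches

# Example: the reply string below stands in for an actual LLM API response
# (e.g. from LLaMA, GPT, or Gemini, as evaluated in the paper).
meta = {"frequency": "hourly", "length": 8760, "seasonality": "daily and weekly"}
prompt = build_prompt(meta)
choice = parse_choice("I would recommend DeepAR for this dataset.")
```

Because the selector only consumes lightweight metadata and a single LLM reply, it sidesteps the cost of evaluating every candidate model on every dataset, which is where the claimed reduction in computational overhead comes from.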

