

Forecasting Time Series with LLMs via Patch-Based Prompting and Decomposition

June 15, 2025
Authors: Mayank Bumb, Anshul Vemulapalli, Sri Harsha Vardhan Prasad Jella, Anish Gupta, An La, Ryan A. Rossi, Hongjie Chen, Franck Dernoncourt, Nesreen K. Ahmed, Yu Wang
cs.AI

Abstract

Recent advances in Large Language Models (LLMs) have demonstrated new possibilities for accurate and efficient time series analysis, but prior work often required heavy fine-tuning and/or ignored inter-series correlations. In this work, we explore simple and flexible prompt-based strategies that enable LLMs to perform time series forecasting without extensive retraining or complex external architectures. Through specialized prompting methods that leverage time series decomposition, patch-based tokenization, and similarity-based neighbor augmentation, we find that it is possible to enhance LLM forecasting quality while maintaining simplicity and requiring minimal data preprocessing. To this end, we propose our own method, PatchInstruct, which enables LLMs to make precise and effective predictions.
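
To make the abstract's ingredients concrete, here is a minimal sketch of how time series decomposition and patch-based tokenization could be combined into a forecasting prompt. This is an illustration under stated assumptions, not the paper's PatchInstruct implementation: the decomposition routine (moving-average trend plus per-phase seasonal means), the function names, and the patch length are all hypothetical choices.

```python
import numpy as np

def decompose(series: np.ndarray, period: int):
    """Simple additive decomposition: moving-average trend, per-phase
    seasonal means, and a residual. (Illustrative only; the paper does
    not prescribe a specific decomposition routine here.)"""
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.resize(phase_means, series.shape[0])  # tile to full length
    resid = series - trend - seasonal
    return trend, seasonal, resid

def to_patches(values: np.ndarray, patch_len: int) -> np.ndarray:
    """Split a 1-D series into fixed-length patches (last partial patch dropped)."""
    n = (len(values) // patch_len) * patch_len
    return values[:n].reshape(-1, patch_len)

def build_prompt(series: np.ndarray, period: int = 12, patch_len: int = 4,
                 horizon: int = 8) -> str:
    """Render each decomposed component as patch tokens in a text prompt."""
    trend, seasonal, resid = decompose(series, period)
    lines = [f"Forecast the next {horizon} values of this series."]
    for name, comp in [("trend", trend), ("seasonal", seasonal), ("residual", resid)]:
        patches = [" ".join(f"{v:.2f}" for v in p) for p in to_patches(comp, patch_len)]
        lines.append(f"{name}: " + " | ".join(patches))
    lines.append("Answer with numbers only, separated by commas.")
    return "\n".join(lines)

if __name__ == "__main__":
    t = np.arange(48, dtype=float)
    demo = 0.5 * t + 3 * np.sin(2 * np.pi * t / 12)  # toy trend + seasonality
    print(build_prompt(demo))  # the resulting string is what the LLM would receive
```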
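The third ingredient, similarity-based neighbor augmentation, can be sketched the same way: retrieve the series from a pool that are most similar to the target and append them to the prompt as extra context. The distance metric (Euclidean on z-normalized values) and the neighbor count `k` are illustrative assumptions, not choices confirmed by the paper.

```python
import numpy as np

def znorm(x: np.ndarray) -> np.ndarray:
    """Z-normalize so similarity reflects shape rather than scale."""
    std = x.std()
    return (x - x.mean()) / std if std > 0 else x - x.mean()

def nearest_neighbors(target: np.ndarray, pool: list[np.ndarray], k: int = 2):
    """Indices of the k pool series closest to the target under Euclidean
    distance on z-normalized values. Assumes all series share the target's
    length; a real system would align or resample first."""
    dists = [np.linalg.norm(znorm(target) - znorm(s)) for s in pool]
    return np.argsort(dists)[:k]

def augment_prompt(base_prompt: str, target: np.ndarray,
                   pool: list[np.ndarray], k: int = 2) -> str:
    """Append the k most similar series as additional context lines."""
    lines = [base_prompt, "Related series with similar dynamics:"]
    for rank, idx in enumerate(nearest_neighbors(target, pool, k), 1):
        vals = " ".join(f"{v:.2f}" for v in pool[idx])
        lines.append(f"neighbor {rank}: {vals}")
    return "\n".join(lines)
```

In this sketch the output of `build_prompt` above would be passed as `base_prompt`, so the LLM sees the decomposed, patch-tokenized target followed by its nearest neighbors from the pool.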