Forecasting Time Series with LLMs via Patch-Based Prompting and Decomposition

June 15, 2025
作者: Mayank Bumb, Anshul Vemulapalli, Sri Harsha Vardhan Prasad Jella, Anish Gupta, An La, Ryan A. Rossi, Hongjie Chen, Franck Dernoncourt, Nesreen K. Ahmed, Yu Wang
cs.AI

Abstract

Recent advances in Large Language Models (LLMs) have demonstrated new possibilities for accurate and efficient time series analysis, but prior work often required heavy fine-tuning and/or ignored inter-series correlations. In this work, we explore simple and flexible prompt-based strategies that enable LLMs to perform time series forecasting without extensive retraining or the use of a complex external architecture. Through the exploration of specialized prompting methods that leverage time series decomposition, patch-based tokenization, and similarity-based neighbor augmentation, we find that it is possible to enhance LLM forecasting quality while maintaining simplicity and requiring minimal preprocessing of data. To this end, we propose our own method, PatchInstruct, which enables LLMs to make precise and effective predictions.
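
As an illustrative aid (not the authors' released implementation, which the abstract does not detail), the sketch below shows how the named ingredients could fit together: a classical additive decomposition, fixed-length patch tokenization, and a textual prompt assembled from the resulting patches. All function names (`decompose`, `to_patches`, `build_prompt`), the patch length, and the prompt wording are assumptions made for this sketch.

```python
import numpy as np

def decompose(series: np.ndarray, period: int):
    """Classical additive decomposition via a centered moving average.
    Illustrative only; the paper's exact decomposition is not specified."""
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # Average each phase of the cycle to estimate the seasonal component.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(series) // period + 1)[: len(series)]
    residual = series - trend - seasonal
    return trend, seasonal, residual

def to_patches(values: np.ndarray, patch_len: int) -> np.ndarray:
    """Group consecutive values into fixed-length patches (one 'token' each)."""
    n = len(values) // patch_len * patch_len
    return values[:n].reshape(-1, patch_len)

def build_prompt(series, period: int = 12, patch_len: int = 4, horizon: int = 8) -> str:
    """Serialize decomposed, patched components into a forecasting prompt."""
    trend, seasonal, residual = decompose(np.asarray(series, dtype=float), period)
    parts = []
    for name, comp in [("trend", trend), ("seasonal", seasonal), ("residual", residual)]:
        rows = ["[" + ", ".join(f"{v:.2f}" for v in p) + "]" for p in to_patches(comp, patch_len)]
        parts.append(f"{name}: " + " ".join(rows))
    return (
        "You are a time series forecasting assistant.\n"
        "The series below is decomposed into components, each given as "
        f"patches of {patch_len} values.\n" + "\n".join(parts) +
        f"\nForecast the next {horizon} values of the original series."
    )

# Toy example: trend plus seasonality.
print(build_prompt(np.sin(np.linspace(0, 12, 96)) + np.linspace(0, 2, 96)))
```

A similarity-based neighbor-augmentation step, as mentioned in the abstract, could prepend patches from related series (e.g., nearest neighbors under Euclidean distance) to the same prompt; the distance measure here is likewise an assumption.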