VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters

August 30, 2024
Authors: Mouxiang Chen, Lefei Shen, Zhuo Li, Xiaoyun Joy Wang, Jianling Sun, Chenghao Liu
cs.AI

Abstract

Foundation models have emerged as a promising approach in time series forecasting (TSF). Existing approaches either fine-tune large language models (LLMs) or build large-scale time-series datasets to develop TSF foundation models. However, these methods face challenges due to the severe cross-domain gap or in-domain heterogeneity. In this paper, we explore a new road to building a TSF foundation model from rich and high-quality natural images, based on the intrinsic similarities between images and time series. To bridge the gap between the two domains, we reformulate the TSF task as an image reconstruction task, which is further processed by a visual masked autoencoder (MAE) self-supervised pre-trained on the ImageNet dataset. Surprisingly, without further adaptation in the time-series domain, the proposed VisionTS could achieve superior zero-shot forecasting performance compared to existing TSF foundation models. With minimal fine-tuning, VisionTS could further improve the forecasting and achieve state-of-the-art performance in most cases. These findings suggest that visual models could be a free lunch for TSF and highlight the potential for future cross-domain research between computer vision and TSF. Our code is publicly available at https://github.com/Keytoyze/VisionTS.
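
The reformulation described in the abstract — fold the 1D context window by its periodicity into a 2D grid, render it as an image whose future region is masked, and let an ImageNet-pretrained visual MAE fill in the masked part — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the helper names (`fold_series`, `render_image`, `zero_shot_forecast`) are invented here, and `dummy_mae_reconstruct` is only a stand-in for the real pretrained MAE provided via the repository above.

```python
# Minimal sketch of the image-reconstruction view of forecasting (not the
# authors' code). Assumes a known periodicity; the MAE is mocked by a dummy.
import torch
import torch.nn.functional as F


def fold_series(context: torch.Tensor, period: int) -> torch.Tensor:
    """Fold a 1D context window into a (period x n_cycles) grid,
    one full cycle per column, dropping the oldest partial cycle."""
    n = (context.numel() // period) * period
    return context[-n:].reshape(-1, period).T  # (period, n_cycles)


def render_image(grid: torch.Tensor, size: int = 224) -> torch.Tensor:
    """Resize the 2D grid to the MAE input resolution and replicate the
    single channel to RGB, yielding a (1, 3, size, size) tensor."""
    img = F.interpolate(grid[None, None], size=(size, size),
                        mode="bilinear", align_corners=False)
    return img.repeat(1, 3, 1, 1)


def dummy_mae_reconstruct(img: torch.Tensor, visible_frac: float) -> torch.Tensor:
    """Placeholder for the pretrained visual MAE: it merely copies the last
    visible column into the masked region. A real ImageNet-pretrained MAE
    would be plugged in here for the zero-shot setting."""
    split = int(img.shape[-1] * visible_frac)
    out = img.clone()
    out[..., split:] = img[..., split - 1:split]  # broadcast last visible column
    return out


def zero_shot_forecast(context: torch.Tensor, period: int, horizon: int,
                       reconstruct=dummy_mae_reconstruct) -> torch.Tensor:
    """Forecast `horizon` steps by rendering the context as an image,
    masking the future region, and reading the reconstruction back."""
    grid = fold_series(context, period)
    mean, std = grid.mean(), grid.std() + 1e-6
    grid_norm = (grid - mean) / std
    n_future = -(-horizon // period)              # ceil: future cycles needed
    visible_frac = grid.shape[1] / (grid.shape[1] + n_future)
    padded = F.pad(grid_norm, (0, n_future))      # zero columns = masked future
    recon = reconstruct(render_image(padded), visible_frac)
    # Map the reconstruction back to grid resolution and read off the future.
    back = F.interpolate(recon.mean(1, keepdim=True), size=tuple(padded.shape),
                         mode="bilinear", align_corners=False)[0, 0]
    future = back[:, grid.shape[1]:].T.reshape(-1)[:horizon]
    return future * std + mean                    # undo the normalization


if __name__ == "__main__":
    # Toy hourly series with a 24-step daily cycle; forecast two days ahead.
    t = torch.arange(24 * 14, dtype=torch.float32)
    series = torch.sin(2 * torch.pi * t / 24) + 0.1 * torch.randn_like(t)
    print(zero_shot_forecast(series, period=24, horizon=48).shape)  # torch.Size([48])
```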
