
A Mamba Foundation Model for Time Series Forecasting (2411.02941v1)

Published 5 Nov 2024 in cs.LG and cs.AI

Abstract: Time series foundation models have demonstrated strong performance in zero-shot learning, making them well-suited for predicting rapidly evolving patterns in real-world applications where relevant training data are scarce. However, most of these models rely on the Transformer architecture, which incurs quadratic complexity as input length increases. To address this, we introduce TSMamba, a linear-complexity foundation model for time series forecasting built on the Mamba architecture. The model captures temporal dependencies through both forward and backward Mamba encoders, achieving high prediction accuracy. To reduce reliance on large datasets and lower training costs, TSMamba employs a two-stage transfer learning process that leverages pretrained Mamba LLMs, allowing effective time series modeling with a moderate training set. In the first stage, the forward and backward backbones are optimized via patch-wise autoregressive prediction; in the second stage, the model trains a prediction head and refines other components for long-term forecasting. While the backbone assumes channel independence to manage varying channel numbers across datasets, a channel-wise compressed attention module is introduced to capture cross-channel dependencies during fine-tuning on specific multivariate datasets. Experiments show that TSMamba's zero-shot performance is comparable to state-of-the-art time series foundation models, despite using significantly less training data. It also achieves competitive or superior full-shot performance compared to task-specific prediction models. The code will be made publicly available.
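The architecture outlined in the abstract (a channel-independent backbone with forward and backward encoders over patch embeddings, plus a channel-wise compressed attention module added during fine-tuning, and a prediction head trained in the second stage) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: `MambaBlockStub` is a placeholder for real Mamba SSM blocks, and all module names, patch sizes, and dimensions here are assumed for demonstration.

```python
# Minimal sketch of a TSMamba-style model as described in the abstract.
# NOT the authors' code: MambaBlockStub stands in for real Mamba blocks,
# and every name, dimension, and patch size below is an assumption.
import torch
import torch.nn as nn


class MambaBlockStub(nn.Module):
    """Placeholder for a linear-complexity Mamba sequence block."""
    def __init__(self, d_model: int):
        super().__init__()
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)  # stand-in only

    def forward(self, x):              # x: (batch, seq, d_model)
        out, _ = self.rnn(x)
        return out


class BidirectionalBackbone(nn.Module):
    """Forward and backward encoders over patch embeddings (channel-independent)."""
    def __init__(self, patch_len: int = 16, d_model: int = 128):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)
        self.fwd = MambaBlockStub(d_model)
        self.bwd = MambaBlockStub(d_model)

    def forward(self, x):              # x: (batch * channels, num_patches, patch_len)
        z = self.embed(x)
        h_fwd = self.fwd(z)
        h_bwd = self.bwd(torch.flip(z, dims=[1])).flip(dims=[1])
        return h_fwd + h_bwd           # fused bidirectional representation


class CompressedChannelAttention(nn.Module):
    """Cross-channel attention over compressed per-channel summaries (fine-tuning stage)."""
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, h):              # h: (batch, channels, d_model)
        out, _ = self.attn(h, h, h)
        return out


class TSMambaSketch(nn.Module):
    def __init__(self, patch_len: int = 16, d_model: int = 128, horizon: int = 96):
        super().__init__()
        self.backbone = BidirectionalBackbone(patch_len, d_model)
        self.channel_attn = CompressedChannelAttention(d_model)
        self.head = nn.Linear(d_model, horizon)   # prediction head trained in stage 2

    def forward(self, x):              # x: (batch, channels, num_patches, patch_len)
        b, c, p, l = x.shape
        h = self.backbone(x.reshape(b * c, p, l))        # channel-independent encoding
        summary = h.mean(dim=1).reshape(b, c, -1)        # compress along the time axis
        summary = summary + self.channel_attn(summary)   # cross-channel mixing
        return self.head(summary)                        # (batch, channels, horizon)


if __name__ == "__main__":
    model = TSMambaSketch()
    dummy = torch.randn(2, 7, 32, 16)   # batch=2, 7 channels, 32 patches of length 16
    print(model(dummy).shape)           # torch.Size([2, 7, 96])
```

In this sketch the backbone treats every channel as an independent sequence (matching the channel-independence assumption in the abstract), while the compressed attention module mixes information across channels only at the summary level, which keeps the cross-channel step cheap relative to full per-timestep attention.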

Authors (10)
  1. Haoyu Ma (45 papers)
  2. Yushu Chen (7 papers)
  3. Wenlai Zhao (7 papers)
  4. Jinzhe Yang (2 papers)
  5. Yingsheng Ji (2 papers)
  6. Xinghua Xu (1 paper)
  7. Xiaozhu Liu (1 paper)
  8. Hao Jing (8 papers)
  9. Shengzhuo Liu (2 papers)
  10. Guangwen Yang (40 papers)