Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting (2205.14415v4)

Published 28 May 2022 in cs.LG and eess.SP

Abstract: Transformers have shown great power in time series forecasting due to their global-range modeling ability. However, their performance can degenerate terribly on non-stationary real-world data in which the joint distribution changes over time. Previous studies primarily adopt stationarization to attenuate the non-stationarity of original series for better predictability. But the stationarized series deprived of inherent non-stationarity can be less instructive for real-world bursty events forecasting. This problem, termed over-stationarization in this paper, leads Transformers to generate indistinguishable temporal attentions for different series and impedes the predictive capability of deep models. To tackle the dilemma between series predictability and model capability, we propose Non-stationary Transformers as a generic framework with two interdependent modules: Series Stationarization and De-stationary Attention. Concretely, Series Stationarization unifies the statistics of each input and converts the output with restored statistics for better predictability. To address the over-stationarization problem, De-stationary Attention is devised to recover the intrinsic non-stationary information into temporal dependencies by approximating distinguishable attentions learned from raw series. Our Non-stationary Transformers framework consistently boosts mainstream Transformers by a large margin, which reduces MSE by 49.43% on Transformer, 47.34% on Informer, and 46.89% on Reformer, making them the state-of-the-art in time series forecasting. Code is available at this repository: https://github.com/thuml/Nonstationary_Transformers.

Authors (4)
  1. Yong Liu (721 papers)
  2. Haixu Wu (26 papers)
  3. Jianmin Wang (119 papers)
  4. Mingsheng Long (110 papers)
Citations (282)

Summary

Overview of "Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting"

This paper explores the challenges of employing Transformers in time series forecasting, particularly when dealing with non-stationary data. Traditional approaches have focused on stationarization techniques to make time series data more predictable. However, over-stationarization can suppress inherent non-stationarity, reducing the model's ability to predict bursty or unexpected events. Addressing these issues, the authors propose a novel framework called Non-stationary Transformers, which includes two primary modules: Series Stationarization and De-stationary Attention.

Key Contributions

  • Series Stationarization: This module applies a simple normalization strategy to unify the statistics of each input window, improving predictability. It normalizes each input series by its own mean and standard deviation and restores these statistics to the model's output after prediction (a sketch of this wrapper is given after this list).
  • De-stationary Attention: To counter over-stationarization, this mechanism re-injects the intrinsic non-stationary information into the attention computation by approximating the temporal attentions that would have been learned from the raw, unnormalized series, allowing the model to produce distinguishable attention patterns for different series (see the second sketch below).
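
The official code is PyTorch-based; the snippet below is only a minimal illustrative sketch of the normalize-forecast-denormalize wrapper described in the first bullet, not the authors' implementation. The class name `SeriesStationarization` and the `backbone` argument are placeholders for any sequence-to-sequence forecaster.

```python
import torch
import torch.nn as nn

class SeriesStationarization(nn.Module):
    """Illustrative sketch: normalize each input window by its own statistics,
    run an arbitrary forecasting backbone, then restore the removed statistics
    to the prediction (de-normalization)."""

    def __init__(self, backbone: nn.Module, eps: float = 1e-5):
        super().__init__()
        self.backbone = backbone  # any forecaster, e.g. a Transformer encoder-decoder
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_length, num_variates)
        mu = x.mean(dim=1, keepdim=True)                                        # per-window mean
        sigma = torch.sqrt(x.var(dim=1, keepdim=True, unbiased=False) + self.eps)
        x_norm = (x - mu) / sigma                                               # stationarized input
        y_norm = self.backbone(x_norm)                                          # forecast in normalized space
        return y_norm * sigma + mu                                              # restore original statistics
```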
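For the second bullet, the paper rescales the pre-softmax attention logits with a learned positive factor tau and a shift Delta so that the attention computed on stationarized inputs approximates the attention of the raw series. The function below is a simplified sketch under assumed tensor shapes; in the paper, tau and Delta are produced by small MLP projectors over the unnormalized series and its statistics, which is omitted here.

```python
import torch

def de_stationary_attention(q, k, v, tau, delta):
    """Sketch of the de-stationary attention rescaling (shapes assumed):
    q, k, v : (batch, heads, length, d_k), computed from the stationarized series
    tau     : (batch, 1, 1, 1), positive scale learned from the raw series' statistics
    delta   : (batch, 1, 1, length), shift learned from the raw series
    tau and delta re-introduce the non-stationary information removed by
    normalization before the softmax is applied."""
    d_k = q.size(-1)
    scores = (q @ k.transpose(-2, -1)) * tau + delta   # rescaled pre-softmax logits
    attn = torch.softmax(scores / d_k ** 0.5, dim=-1)
    return attn @ v
```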

Experimental Results

The Non-stationary Transformers framework demonstrates significant improvements over baseline Transformer models, achieving state-of-the-art performance on several benchmarks, including the Electricity, ETT, Exchange, ILI, Traffic, and Weather datasets. It reduces MSE by 49.43% on the vanilla Transformer, 47.34% on Informer, and 46.89% on Reformer, with comparable gains reported when applied to variants such as Autoformer.

Practical Implications

The proposed framework offers a robust method to enhance the predictive capabilities of Transformer-based models in real-world non-stationary time series data. This has potential applications in domains such as weather forecasting, energy consumption planning, and financial risk assessment, where capturing non-stationary behaviors is crucial.

Theoretical Implications

From a theoretical perspective, this work suggests that a balance must be struck between stationarization for predictability and maintaining non-stationary characteristics for model capability. It highlights the importance of integrating non-stationary information directly into model architectures.

Future Directions

Future research could explore generalizing these concepts beyond Transformer-based models or integrating alternative stationarization techniques. There is also potential in refining the De-stationary Attention mechanism to work with more sophisticated attention variants.

Overall, this paper makes a compelling case for revisiting how non-stationarity is treated in time series forecasting, advocating for a nuanced approach that leverages both predictability and meaningful temporal dependencies.
