
D-PAD: Deep-Shallow Multi-Frequency Patterns Disentangling for Time Series Forecasting (2403.17814v1)

Published 26 Mar 2024 in cs.AI

Abstract: In time series forecasting, effectively disentangling intricate temporal patterns is crucial. While recent works endeavor to combine decomposition techniques with deep learning, multiple frequencies may still be mixed in the decomposed components, e.g., trend and seasonal. Furthermore, frequency domain analysis methods, e.g., Fourier and wavelet transforms, have limited time-domain resolution and adaptability. In this paper, we propose D-PAD, a deep-shallow multi-frequency patterns disentangling neural network for time series forecasting. Specifically, a multi-component decomposing (MCD) block is introduced to decompose the series into components with different frequency ranges, corresponding to the "shallow" aspect. A decomposition-reconstruction-decomposition (D-R-D) module is proposed to progressively extract the information of frequencies mixed in the components, corresponding to the "deep" aspect. After that, an interaction and fusion (IF) module is used to further analyze the components. Extensive experiments on seven real-world datasets demonstrate that D-PAD achieves state-of-the-art performance, outperforming the best baseline by an average of 9.48% in MSE and 7.15% in MAE.
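The paper's MCD block and D-R-D module are not reproduced here. As a minimal sketch of the "shallow" idea — splitting a series into components covering different frequency ranges — one might cascade moving-average filters, extracting a low-frequency component at each pass and handing the residual (higher frequencies) to the next. The window sizes and the moving-average operator below are illustrative assumptions, not the paper's decomposition:

```python
import numpy as np

def moving_average(x, window):
    """Smooth a 1-D series with a centered, edge-padded moving average."""
    pad = window // 2
    xp = np.pad(x, (pad, window - 1 - pad), mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")

def multi_band_decompose(x, windows=(24, 8)):
    """Split a series into components of decreasing frequency range.

    Each pass extracts the low-frequency part with a wide average and
    passes the residual (higher frequencies) on; the final residual
    holds the highest-frequency content. Components sum back to x.
    """
    components = []
    residual = x.astype(float)
    for w in sorted(windows, reverse=True):
        low = moving_average(residual, w)
        components.append(low)       # lower-frequency component
        residual = residual - low    # higher-frequency remainder
    components.append(residual)      # highest-frequency residual
    return components

# Toy series: linear trend + slow and fast seasonalities
t = np.arange(200)
x = 0.05 * t + np.sin(2 * np.pi * t / 50) + 0.3 * np.sin(2 * np.pi * t / 7)
parts = multi_band_decompose(x)
assert np.allclose(sum(parts), x)  # the decomposition is lossless
```

Because each stage subtracts what it extracts, the components reconstruct the input exactly; a fixed filter bank like this, however, is exactly what the paper argues against — mixed frequencies can remain inside each band, which motivates the adaptive "deep" D-R-D refinement.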

