
PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations (2402.16913v1)

Published 25 Feb 2024 in cs.LG

Abstract: Recent advancements in deep learning have led to the development of various models for long-term multivariate time-series forecasting (LMTF), many of which have shown promising results. Generally, the focus has been on historical-value-based models, which rely on past observations to predict future series. Notably, a new trend has emerged with time-index-based models, offering a more nuanced understanding of the continuous dynamics underlying time series. Unlike these two types of models, which aggregate information over the spatial or temporal domain, in this paper we consider multivariate time series as spatiotemporal data regularly sampled from a continuous dynamical system, which can be represented by partial differential equations (PDEs) with a fixed spatial domain. Building on this perspective, we present PDETime, a novel LMTF model inspired by the principles of neural PDE solvers, following encoding-integration-decoding operations. Our extensive experimentation across seven diverse real-world LMTF datasets reveals that PDETime not only adapts effectively to the intrinsic spatiotemporal nature of the data but also sets new benchmarks, achieving state-of-the-art results.
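The encoding-integration-decoding pipeline mentioned in the abstract can be sketched in miniature. The snippet below is an illustrative assumption, not the authors' implementation: all function names, the trivial linear encoder/decoder, and the explicit-Euler integrator standing in for a learned neural PDE solver are hypothetical choices made only to show the three-stage structure.

```python
# Hypothetical sketch of an encode -> integrate -> decode forecaster.
# The real PDETime model replaces each stage with learned neural components;
# here each stage is a hand-written stand-in.

def encode(history):
    # Map the observed history (list of per-step variable vectors) into a
    # latent spatial state; here, simply the last observation.
    return list(history[-1])

def integrate(latent, horizon, step=1.0):
    # Advance the latent state over the forecast horizon, mimicking a
    # numerical integrator for du/dt = f(u). The dynamics f(u) = -0.1 * u
    # (a simple decay) is a placeholder for a learned operator.
    states = []
    u = list(latent)
    for _ in range(horizon):
        u = [ui + step * (-0.1 * ui) for ui in u]  # explicit Euler step
        states.append(list(u))
    return states

def decode(states):
    # Map latent states back to observation space (identity placeholder).
    return states

def forecast(history, horizon):
    return decode(integrate(encode(history), horizon))

# Two timesteps of a 2-variable series, forecast 3 steps ahead.
preds = forecast([[1.0, 2.0], [1.5, 2.5]], horizon=3)
```

Each prediction step reuses the latent state from the previous one, which is what distinguishes this integrator view from models that map history to the full horizon in a single shot.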

Authors (7)
  1. Shiyi Qi
  2. Zenglin Xu
  3. Yiduo Li
  4. Liangjian Wen
  5. Qingsong Wen
  6. Qifan Wang
  7. Yuan Qi