Efficient Interpretable Nonlinear Modeling for Multiple Time Series (2309.17154v1)
Abstract: Predictive linear and nonlinear models based on kernel machines or deep neural networks have been used to discover dependencies among time series. This paper proposes an efficient nonlinear modeling approach for multiple time series, with a complexity comparable to that of linear vector autoregressive (VAR) models while still capturing nonlinear interactions among the different time-series variables. The modeling assumption is that the set of time series is generated in two steps: first, a linear VAR process in a latent space; second, a set of invertible, Lipschitz-continuous nonlinear mappings applied per sensor, i.e., a component-wise mapping from each latent variable to the corresponding variable in the measurement space. Identifying the VAR coefficients yields a topology representation of the dependencies among these variables. The proposed approach models each component-wise nonlinearity with an invertible neural network and imposes sparsity on the VAR coefficients to reflect the parsimonious dependencies typically found in real applications. To solve the resulting optimization problems efficiently, a custom algorithm is devised that combines proximal gradient descent, stochastic primal-dual updates, and projection steps to enforce the corresponding constraints. Experimental results on both synthetic and real data sets show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner while also improving time-series prediction, compared to current state-of-the-art methods.
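The two-step generative assumption described in the abstract can be sketched as follows. This is a minimal NumPy simulation, not the paper's implementation: the dimensions, the particular sparse coefficient matrices, and the tanh-based mapping `g` are illustrative stand-ins for the learned invertible neural networks.

```python
# Hedged sketch: a sparse latent VAR process followed by a per-sensor
# invertible, Lipschitz-continuous nonlinearity (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 3, 2, 200  # number of sensors, VAR order, number of samples

# Sparse VAR coefficient matrices A[p]; the nonzero entries encode
# the dependency topology among the latent variables.
A = np.zeros((P, N, N))
A[0, 0, 0], A[0, 1, 0], A[1, 2, 1] = 0.5, 0.4, 0.3

def g(z):
    # Stand-in for the component-wise invertible mapping; strictly
    # increasing, hence invertible, with a bounded derivative
    # (Lipschitz). The paper learns this with an invertible network.
    return z + 0.25 * np.tanh(z)

# Step 1: simulate the linear VAR process in the latent space.
z = np.zeros((T, N))
for t in range(P, T):
    z[t] = sum(A[p] @ z[t - 1 - p] for p in range(P)) \
           + 0.1 * rng.standard_normal(N)

# Step 2: map each latent variable to the measurement space,
# component by component.
y = g(z)
print(y.shape)  # (200, 3)
```

Because `g` acts component-wise and is invertible, the linear dependency structure (the support of the `A[p]` matrices) is preserved between latent and measurement variables, which is what makes the recovered topology interpretable.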
- Kevin Roy
- Luis Miguel Lopez-Ramos
- Baltasar Beferull-Lozano