GSP-KalmanNet: Tracking Graph Signals via Neural-Aided Kalman Filtering (2311.16602v1)

Published 28 Nov 2023 in eess.SP, cs.AI, and cs.LG

Abstract: Dynamic systems of graph signals are encountered in various applications, including social networks, power grids, and transportation. While such systems can often be described as state space (SS) models, tracking graph signals via conventional tools based on the Kalman filter (KF) and its variants is typically challenging. This is due to the nonlinearity, high dimensionality, irregularity of the domain, and complex modeling associated with real-world dynamic systems of graph signals. In this work, we study the tracking of graph signals using a hybrid model-based/data-driven approach. We develop the GSP-KalmanNet, which tracks the hidden graphical states from the graphical measurements by jointly leveraging graph signal processing (GSP) tools and deep learning (DL) techniques. The derivations of the GSP-KalmanNet are based on extending the KF to exploit the inherent graph structure via graph frequency domain filtering, which considerably simplifies the computational complexity entailed in processing high-dimensional signals and increases the robustness to small topology changes. Then, we use data to learn the Kalman gain following the recently proposed KalmanNet framework, which copes with partial and approximated modeling, without forcing a specific model over the noise statistics. Our empirical results demonstrate that the proposed GSP-KalmanNet achieves enhanced accuracy and run time performance as well as improved robustness to model misspecifications compared with both model-based and data-driven benchmarks.
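
The core recursion described in the abstract can be sketched in code: measurements are mapped to the graph frequency domain via the GFT (the eigenbasis of the graph Laplacian), the prediction and innovation are computed per graph frequency, and the Kalman gain is produced by a recurrent network rather than propagated analytically. The sketch below is illustrative only, assuming a linear state-evolution and observation model that is diagonal in the graph frequency domain and a GRU that outputs a diagonal gain; all class and variable names (e.g. `GSPKalmanNetSketch`, `gain_head`) are placeholders and not the authors' implementation.

```python
# Minimal sketch of a graph-frequency-domain Kalman recursion with a learned gain.
# Assumptions (not from the paper's code): diagonal evolution/observation filters
# in the GFT domain, a diagonal learned gain, and a single GRU producing it.
import torch
import torch.nn as nn


def graph_fourier_basis(laplacian: torch.Tensor) -> torch.Tensor:
    """Eigenvectors of the graph Laplacian, used as the GFT basis."""
    _, eigvecs = torch.linalg.eigh(laplacian)
    return eigvecs  # columns are graph-frequency modes


class GSPKalmanNetSketch(nn.Module):
    """Tracks a graph signal in the graph frequency domain with a learned gain."""

    def __init__(self, num_nodes: int, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(input_size=2 * num_nodes, hidden_size=hidden_dim,
                          batch_first=True)
        self.gain_head = nn.Linear(hidden_dim, num_nodes)  # diagonal gain per frequency

    def forward(self, y_seq, gft, f_diag, h_diag):
        """
        y_seq:  (T, N) noisy graph measurements in the vertex domain
        gft:    (N, N) GFT basis (Laplacian eigenvectors)
        f_diag, h_diag: (N,) diagonal state-evolution / observation filters
                        in the graph frequency domain
        """
        x_hat = torch.zeros(y_seq.shape[1])           # frequency-domain state estimate
        h = None                                      # GRU hidden state
        estimates = []
        for y in y_seq:
            y_f = gft.T @ y                           # GFT of the measurement
            x_pred = f_diag * x_hat                   # predict in the frequency domain
            innovation = y_f - h_diag * x_pred        # frequency-domain innovation
            feats = torch.cat([innovation, x_pred]).view(1, 1, -1)
            out, h = self.gru(feats, h)
            gain = self.gain_head(out.squeeze())      # learned (diagonal) Kalman gain
            x_hat = x_pred + gain * innovation        # update step
            estimates.append(gft @ x_hat)             # map back to the vertex domain
        return torch.stack(estimates)
```

Working per graph frequency keeps each update O(N) once the GFT basis is computed, which reflects the complexity reduction claimed in the abstract; learning the gain from data (as in the KalmanNet framework) removes the need for explicit noise-covariance propagation.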
