Normalising Flow-based Differentiable Particle Filters (2403.01499v1)

Published 3 Mar 2024 in cs.LG and eess.SP

Abstract: Recently, there has been a surge of interest in incorporating neural networks into particle filters, e.g. differentiable particle filters, to perform joint sequential state estimation and model learning for non-linear non-Gaussian state-space models in complex environments. Existing differentiable particle filters are mostly constructed with vanilla neural networks that do not allow density estimation. As a result, they are either restricted to a bootstrap particle filtering framework or employ predefined distribution families (e.g. Gaussian distributions), limiting their performance in more complex real-world scenarios. In this paper we present a differentiable particle filtering framework that uses (conditional) normalising flows to build its dynamic model, proposal distribution, and measurement model. This not only enables valid probability densities but also allows the proposed method to adaptively learn these modules in a flexible way, without being restricted to predefined distribution families. We derive the theoretical properties of the proposed filters and evaluate the proposed normalising flow-based differentiable particle filters' performance through a series of numerical experiments.
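
To make the abstract's central idea concrete, below is a minimal sketch (not the authors' implementation) of a conditional normalising flow used as the dynamic model of a differentiable particle filter: an invertible, conditioned transformation pushes base noise to the next particle state, and the change-of-variables formula yields a valid, learnable transition density. The class name CondAffineFlow, the single affine layer, and the dimensions are illustrative assumptions; the paper additionally builds the proposal and measurement models from (conditional) flows.

```python
# Minimal sketch: a conditional affine flow as a particle-filter dynamic model.
# x = z * exp(s(c)) + t(c) is invertible in z given the conditioner c, so it is
# a (one-layer) conditional normalising flow with a tractable density.
import torch
import torch.nn as nn

class CondAffineFlow(nn.Module):
    """One conditional affine layer: x = z * exp(s(c)) + t(c)."""
    def __init__(self, state_dim, cond_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * state_dim),
        )

    def forward(self, z, cond):
        s, t = self.net(cond).chunk(2, dim=-1)
        x = z * torch.exp(s) + t              # push base noise through the flow
        log_det = s.sum(-1)                   # log |det dx/dz| of the affine map
        return x, log_det

    def log_prob(self, x, cond):
        s, t = self.net(cond).chunk(2, dim=-1)
        z = (x - t) * torch.exp(-s)           # invert the flow
        base = torch.distributions.Normal(0.0, 1.0)
        return base.log_prob(z).sum(-1) - s.sum(-1)

# Propagate N particles one step and evaluate the transition density,
# p(x_t | x_{t-1}), which vanilla-network dynamic models cannot provide.
state_dim, N = 2, 100
flow = CondAffineFlow(state_dim, cond_dim=state_dim)
particles = torch.randn(N, state_dim)                   # x_{t-1}
noise = torch.randn(N, state_dim)                       # z ~ N(0, I)
next_particles, _ = flow(noise, cond=particles)         # x_t samples
log_p = flow.log_prob(next_particles, cond=particles)   # valid log density
```

Because both sampling and density evaluation are differentiable in the flow's parameters, the transition model can be trained end to end inside the filter rather than being fixed to a predefined family such as a Gaussian.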
