
Revisiting semi-supervised training objectives for differentiable particle filters (2405.01251v1)

Published 2 May 2024 in cs.LG and stat.ML

Abstract: Differentiable particle filters combine the flexibility of neural networks with the probabilistic nature of sequential Monte Carlo methods. However, traditional approaches rely on the availability of labelled data, i.e., the ground truth latent state information, which is often difficult to obtain in real-world applications. This paper compares the effectiveness of two semi-supervised training objectives for differentiable particle filters. We present results in two simulated environments where labelled data are scarce.
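To make the sequential Monte Carlo core concrete, here is a minimal bootstrap particle filter for a toy 1-D linear-Gaussian state-space model. This is an illustrative sketch only: the model, parameters, and function name are assumptions for exposition, not the environments or training objectives studied in the paper. The multinomial resampling step is the non-differentiable operation that differentiable particle filters replace with relaxed, gradient-friendly variants.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for the toy model
    x_t = 0.9 * x_{t-1} + v_t,  y_t = x_t + w_t,
    with v_t ~ N(0, 0.5^2) and w_t ~ N(0, 0.5^2).
    Returns the posterior-mean estimate of x_t at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=n_particles)
    means = []
    for y in observations:
        # Propagate particles through the transition model.
        particles = 0.9 * particles + rng.normal(0.0, 0.5, size=n_particles)
        # Weight each particle by the observation likelihood N(y; x, 0.5^2).
        log_w = -0.5 * ((y - particles) / 0.5) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Weighted posterior-mean estimate of the latent state.
        means.append(float(np.sum(w * particles)))
        # Multinomial resampling: the step that breaks differentiability
        # and motivates relaxed resampling schemes in differentiable
        # particle filters.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return means
```

In a differentiable particle filter, the hand-specified transition and likelihood above would be replaced by learned neural-network components, which is why ground-truth latent states (or the semi-supervised objectives this paper compares) are needed for training.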
