
Mixing time of the conditional backward sampling particle filter (2312.17572v3)

Published 29 Dec 2023 in stat.CO and math.PR

Abstract: The conditional backward sampling particle filter (CBPF) is a powerful Markov chain Monte Carlo sampler for smoothing in general state space hidden Markov models (HMMs). It was proposed as an improvement over the conditional particle filter (CPF), which is known to have $O(T^2)$ computational time complexity under a general 'strong' mixing assumption, where $T$ is the time horizon. While there is empirical evidence of the superiority of the CBPF over the CPF in practice, this advantage had never been theoretically quantified. We show that the CBPF has $O(T \log T)$ time complexity under strong mixing. In particular, the CBPF's mixing time is upper bounded by $O(\log T)$ for any sufficiently large number of particles $N$, where $N$ depends only on the mixing assumptions and not on $T$. We also show that an $O(\log T)$ mixing time is optimal. To prove our main result, we introduce a novel coupling of two CBPFs, which employs a maximal coupling of the two particle systems at each time instant. Because the coupling is implementable, it also has practical applications: we use it to construct unbiased, finite-variance estimates of functionals that have arbitrary dependence on the latent state's path, at a total expected cost of $O(T \log T)$. As a specific application to real-data analysis, we construct unbiased estimates of the HMM's score function, leading to stochastic gradient maximum likelihood estimation of a financial time-series model. Finally, we also investigate other couplings and show that some of these alternatives can have improved empirical behaviour.
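The abstract names two algorithmic ingredients: a conditional particle filter sweep followed by a backward sampling pass, and a maximal coupling of discrete distributions (which the paper applies to pairs of particle systems). The sketch below is a minimal, illustrative implementation for a toy linear-Gaussian AR(1) HMM, not the paper's exact algorithm; the model, the function names, and the multinomial resampling scheme are assumptions made for illustration only.

```python
import numpy as np

def maximal_coupling(p, q, rng):
    """Draw (i, j) with i ~ p and j ~ q so that P(i == j) is maximal,
    namely 1 - TV(p, q). Illustrates the coupling device; the paper
    applies such couplings to two particle systems at each time."""
    overlap = np.minimum(p, q)
    alpha = overlap.sum()                       # probability of meeting
    if rng.random() < alpha:
        i = rng.choice(len(p), p=overlap / alpha)
        return i, i                             # the two draws coincide
    # otherwise, sample independently from the normalized residuals
    i = rng.choice(len(p), p=(p - overlap) / (1.0 - alpha))
    j = rng.choice(len(q), p=(q - overlap) / (1.0 - alpha))
    return i, j

def cbpf_sweep(ref_path, y, N, phi, sigma_x, sigma_y, rng):
    """One CBPF sweep for the toy HMM
        x_t = phi * x_{t-1} + sigma_x * eps_t,  y_t = x_t + sigma_y * eta_t.
    Conditions on `ref_path` (length T) and returns a fresh trajectory
    drawn by backward sampling; iterating this map is the MCMC sampler."""
    T = len(y)
    X = np.empty((T, N))
    logW = np.empty((T, N))
    X[0] = sigma_x * rng.standard_normal(N)
    X[0, 0] = ref_path[0]                       # pin the reference particle
    logW[0] = -0.5 * ((y[0] - X[0]) / sigma_y) ** 2
    for t in range(1, T):
        w = np.exp(logW[t - 1] - logW[t - 1].max())
        w /= w.sum()
        anc = rng.choice(N, size=N, p=w)        # multinomial resampling
        X[t] = phi * X[t - 1, anc] + sigma_x * rng.standard_normal(N)
        X[t, 0] = ref_path[t]                   # keep conditioning on the reference
        logW[t] = -0.5 * ((y[t] - X[t]) / sigma_y) ** 2
    # backward sampling pass: reweight each particle by the transition density
    path = np.empty(T)
    w = np.exp(logW[-1] - logW[-1].max())
    path[-1] = X[-1, rng.choice(N, p=w / w.sum())]
    for t in range(T - 2, -1, -1):
        logb = logW[t] - 0.5 * ((path[t + 1] - phi * X[t]) / sigma_x) ** 2
        b = np.exp(logb - logb.max())
        path[t] = X[t, rng.choice(N, p=b / b.sum())]
    return path
```

Iterating `cbpf_sweep` from an arbitrary initial path yields a Markov chain targeting the smoothing distribution; coupling two such sweeps by drawing their resampling and backward indices through `maximal_coupling` is, roughly, the construction whose meeting time the paper analyzes.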

