Metropolising forward particle filtering backward sampling and Rao-Blackwellisation of Metropolised particle smoothers (1011.2153v1)

Published 9 Nov 2010 in stat.CO

Abstract: Smoothing in state-space models amounts to computing the conditional distribution of the latent state trajectory given the observations, or expectations of functionals of the state trajectory with respect to this distribution. Unless the model is linear Gaussian or has a finite state space, smoothing distributions are in general infeasible to compute, as they involve integrals over a space of dimensionality at least equal to the number of observations. Recent years have seen an increased interest in Monte Carlo-based methods for smoothing, often involving particle filters. One such method is to approximate the filter distributions with a particle filter, and then to simulate backwards on the trellis of particles using a backward kernel. We show that by supplementing this procedure with a Metropolis-Hastings step that decides whether or not to accept a proposed trajectory, one obtains a Markov chain Monte Carlo scheme whose stationary distribution is the exact smoothing distribution. We also show that in this procedure, backward sampling can be replaced by backward smoothing, which effectively means averaging over all possible trajectories. In an example we compare these approaches to a similar one recently proposed by Andrieu, Doucet and Holenstein, and show that the new methods can be more efficient in terms of precision (inverse variance) per unit of computation time.
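
To make the recipe in the abstract concrete, here is a minimal sketch of the forward-filtering, backward-sampling, Metropolis-Hastings loop. It assumes a toy linear-Gaussian AR(1) model x_t = phi * x_{t-1} + v_t, y_t = x_t + w_t, whose parameters (phi, sigma_v, sigma_w) are illustrative choices, not from the paper. The accept/reject step uses the ratio of particle-filter likelihood estimates, min(1, Zhat'/Zhat), in the spirit of the Andrieu, Doucet and Holenstein sampler the abstract compares against; the paper's exact acceptance rule may differ.

```python
# Hedged sketch: Metropolised forward particle filtering + backward sampling
# on a toy linear-Gaussian AR(1) model. All model parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_v, sigma_w = 0.9, 1.0, 0.5   # hypothetical model parameters
T, N = 50, 200                          # time steps, particles

# Simulate synthetic data from the model.
x = np.zeros(T); y = np.zeros(T)
x[0] = rng.normal(0.0, sigma_v)
y[0] = x[0] + rng.normal(0.0, sigma_w)
for t in range(1, T):
    x[t] = phi * x[t-1] + rng.normal(0.0, sigma_v)
    y[t] = x[t] + rng.normal(0.0, sigma_w)

def log_norm(z, mean, sd):
    """Log density of N(mean, sd^2) evaluated at z (broadcasts)."""
    return -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * ((z - mean) / sd)**2

def logmeanexp(l):
    """Numerically stable log of the mean of exp(l)."""
    m = l.max()
    return m + np.log(np.mean(np.exp(l - m)))

def bootstrap_filter(y):
    """Bootstrap particle filter with multinomial resampling; returns the
    particle trellis, log-weights, and the log-likelihood estimate log Zhat."""
    parts = np.zeros((T, N)); logw = np.zeros((T, N))
    parts[0] = rng.normal(0.0, sigma_v, N)           # prior draw at t = 0
    logw[0] = log_norm(y[0], parts[0], sigma_w)
    logZ = logmeanexp(logw[0])
    for t in range(1, T):
        w = np.exp(logw[t-1] - logw[t-1].max()); w /= w.sum()
        idx = rng.choice(N, N, p=w)                  # multinomial resampling
        parts[t] = phi * parts[t-1, idx] + rng.normal(0.0, sigma_v, N)
        logw[t] = log_norm(y[t], parts[t], sigma_w)
        logZ += logmeanexp(logw[t])
    return parts, logw, logZ

def backward_sample(parts, logw):
    """Draw one trajectory backwards on the particle trellis, reweighting
    filter weights by the transition density (backward kernel)."""
    traj = np.zeros(T)
    w = np.exp(logw[-1] - logw[-1].max()); w /= w.sum()
    traj[-1] = parts[-1, rng.choice(N, p=w)]
    for t in range(T - 2, -1, -1):
        lb = logw[t] + log_norm(traj[t+1], phi * parts[t], sigma_v)
        b = np.exp(lb - lb.max()); b /= b.sum()
        traj[t] = parts[t, rng.choice(N, p=b)]
    return traj

# Independence Metropolis-Hastings over trajectories: each iteration runs a
# fresh particle filter, backward-samples a candidate trajectory, and accepts
# it with probability min(1, Zhat_new / Zhat_old).
n_iters = 200
parts, logw, logZ = bootstrap_filter(y)
current = backward_sample(parts, logw)
chain = [current]
for _ in range(n_iters):
    parts_p, logw_p, logZ_p = bootstrap_filter(y)
    candidate = backward_sample(parts_p, logw_p)
    if np.log(rng.uniform()) < logZ_p - logZ:        # MH accept/reject
        current, logZ = candidate, logZ_p
    chain.append(current)

smoothed_mean = np.mean(chain, axis=0)               # estimate of E[x_t | y_{0:T}]
```

As the abstract notes, the single backward draw at each time step can be replaced by backward smoothing, i.e. a weighted average over all N particles at each step. That Rao-Blackwellisation averages out the extra randomness of the trajectory draw and can improve precision per unit of computation time, at a higher per-iteration cost.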
