Posterior convergence for approximated unknowns in non-Gaussian statistical inverse problems (1112.0906v1)

Published 5 Dec 2011 in math.ST and stat.TH

Abstract: The statistical inverse problem of estimating the probability distribution of an infinite-dimensional unknown given its noisy indirect observation is studied in the Bayesian framework. In practice, one often considers only finite-dimensional unknowns and investigates their probabilities numerically. As many unknowns are function-valued, it is of interest to know whether the estimated probabilities converge when the finite-dimensional approximations of the unknown are refined. In this work, the generalized Bayes formula is shown to be a powerful tool in such convergence studies. With its help, the question of convergence of the posterior distributions is reduced to the convergence of the finite-dimensional (or any other) approximations of the unknown. The approach accommodates a wide class of prior distributions; the main restrictions concern the noise model and the direct theory. Three modes of convergence of posterior distributions are considered: weak convergence, setwise convergence, and convergence in variation. The convergence of conditional mean estimates is also studied. Several examples of applicable infinite-dimensional non-Gaussian noise models are provided, including a generalization of the Cameron-Martin formula for certain non-Gaussian measures. The well-posedness of Bayesian statistical inverse problems is studied as well.
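The phenomenon the abstract describes, stabilization of the posterior (in particular of conditional mean estimates) as the finite-dimensional approximation of a function-valued unknown is refined, can be seen in a toy computation. The sketch below is not from the paper: it uses a Gaussian prior and Gaussian noise so that the conditional mean has a closed form, whereas the paper's results cover non-Gaussian settings. All function names, the integration forward map, and the parameter values are illustrative assumptions.

```python
import numpy as np

def posterior_mean(n, y, obs_x, noise_std=0.05, prior_len=0.2):
    """Conditional mean for an n-point discretization of the unknown f on [0, 1]."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    # Forward map: observe the integral \int_0^t f(s) ds at the points obs_x
    # (a smoothing linear operator), discretized by a simple quadrature rule.
    A = np.array([[h if xi <= t else 0.0 for xi in x] for t in obs_x])
    # Squared-exponential prior covariance on the grid (a common, illustrative
    # choice; the paper allows far more general priors).
    C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * prior_len**2))
    # Gaussian conditional mean: m = C A^T (A C A^T + s^2 I)^{-1} y
    S = A @ C @ A.T + noise_std**2 * np.eye(len(obs_x))
    return x, C @ A.T @ np.linalg.solve(S, y)

# Synthetic data: true unknown f(s) = sin(2 pi s), so the noiseless
# observation at t is (1 - cos(2 pi t)) / (2 pi).
rng = np.random.default_rng(0)
obs_x = np.linspace(0.1, 0.9, 9)
y = (1.0 - np.cos(2.0 * np.pi * obs_x)) / (2.0 * np.pi) \
    + 0.05 * rng.standard_normal(obs_x.size)

# Conditional mean estimates on successively refined grids, compared
# on one common fine grid via linear interpolation.
grid = np.linspace(0.0, 1.0, 200)
on_grid = {}
for n in (8, 16, 32, 64):
    x, m = posterior_mean(n, y, obs_x)
    on_grid[n] = np.interp(grid, x, m)

d_coarse = np.linalg.norm(on_grid[8] - on_grid[16])
d_fine = np.linalg.norm(on_grid[32] - on_grid[64])
```

Under refinement, the distance between successive conditional mean estimates shrinks (`d_fine < d_coarse` here), which is the finite-dimensional shadow of the posterior convergence the paper establishes.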
